I've been trying to get the documentation in order for an open source project I'm working on, which involves a mirrored client and server API. To this end I have created a decorator that can, most of the time, be used to document a method that simply performs validation on its input. You can find a class full of these methods here and the decorator's implementation here.
The decorator, as you can see, uses functools.wraps to preserve the docstring, and I thought it would preserve the signature as well; however, the source code versus the generated documentation looks like this:
Source:
vs
Docs:
Does anyone know a way to have setH's generated documentation show the correct call signature (without writing a new decorator for each signature - there are hundreds of methods I need to mirror)?
I've found a workaround which involves the decorator leaving the unbound method unchanged and instead having the class mutate the method at binding time (object instantiation). This seems like a hack, though, so any comments on it, or alternative approaches, would be appreciated.
In PRAW, I handled this issue by having conditional decorators that return the original function (rather than the decorated function) when a Sphinx build is occurring.
In PRAW's Sphinx conf.py I added the following as a way to indicate that Sphinx is currently building:
import os
os.environ['SPHINX_BUILD'] = '1'
And then in PRAW, its decorators look like:
import os
from functools import wraps

# Don't decorate functions when building the documentation
IS_SPHINX_BUILD = bool(os.getenv('SPHINX_BUILD'))

def limit_chars(function):
    """Truncate the string returned from a function and return the result."""
    @wraps(function)
    def wrapped(self, *args, **kwargs):
        output_string = function(self, *args, **kwargs)
        if len(output_string) > MAX_CHARS:  # MAX_CHARS is a module-level constant in PRAW
            output_string = output_string[:MAX_CHARS - 3] + '...'
        return output_string
    return function if IS_SPHINX_BUILD else wrapped
The return function if IS_SPHINX_BUILD else wrapped line is what allows Sphinx to pick up the correct signature.
Relevant Source
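The same trick generalises to any decorator. Here is a minimal sketch of the pattern with illustrative names (this is not PRAW's actual code):

import os
from functools import wraps

IS_SPHINX_BUILD = bool(os.getenv('SPHINX_BUILD'))

def logged(function):
    """Print a line before delegating to the wrapped function."""
    @wraps(function)
    def wrapped(*args, **kwargs):
        print('calling %s' % function.__name__)
        return function(*args, **kwargs)
    # During a Sphinx build, hand back the undecorated function so that
    # autodoc documents the real signature instead of (*args, **kwargs).
    return function if IS_SPHINX_BUILD else wrapped

The trade-off is that Sphinx never sees the wrapper, so whatever behaviour the wrapper adds is invisible to the generated documentation.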
I'd like to avoid relying on too much outside the standard library, so while I have looked at the decorator module, I have mainly tried to reproduce its functionality... unsuccessfully...
So I took a look at the problem from another angle, and now I have a partially working solution, which is best described by looking at this commit. It's not perfect, as it relies on partial, which clobbers the help in the REPL. The idea is that instead of replacing the function to which the decorator is applied, the function is augmented with attributes.
+def s_repr(obj):
+    """ :param obj: object """
+    return (repr(obj) if not isinstance(obj, SikuliClass)
+            else "self._get_jython_object(%r)" % obj._str_get)
+
+
 def run_on_remote(func):
     ...
-    func.s_repr = lambda obj: (repr(obj)
-                               if not isinstance(obj, SikuliClass) else
-                               "self._get_jython_object(%r)" % obj._str_get)
-
-    def _inner(self, *args):
-        return self.remote._eval("self._get_jython_object(%r).%s(%s)" % (
-            self._id,
-            func.__name__,
-            ', '.join([func.s_repr(x) for x in args])))
-
-    func.func = _inner
+    gjo = "self._get_jython_object"
+    func._augment = {
+        'inner': lambda self, *args: (self.remote._eval("%s(%r).%s(%s)"
+                 % (gjo, self._id, func.__name__,
+                    ', '.join([s_repr(x) for x in args]))))
+    }
     @wraps(func)
     def _outer(self, *args, **kwargs):
         func(self, *args, **kwargs)
-        if hasattr(func, "arg"):
-            args, kwargs = func.arg(*args, **kwargs), {}
-        result = func.func(*args, **kwargs)
-        if hasattr(func, "post"):
-            return func.post(result)
+        if "arg" in func._augment:
+            args, kwargs = func._augment["arg"](self, *args, **kwargs), {}
+        result = func._augment['inner'](self, *args, **kwargs)
+        if "post" in func._augment:
+            return func._augment['post'](result)
         else:
             return result
     def _arg(arg_func):
-        func.arg = arg_func
-        return _outer
+        func._augment['arg'] = arg_func
+        return func

     def _post(post_func):
-        func.post = post_func
-        return _outer
+        func._augment['post'] = post_func
+        return func

     def _func(func_func):
-        func.func = func_func
-        return _outer
-    _outer.arg = _arg
-    _outer.post = _post
-    _outer.func = _func
-    return _outer
+        func._augment['inner'] = func_func
+        return func
+
+    func.arg = _outer.arg = _arg
+    func.post = _outer.post = _post
+    func.func = _outer.func = _func
+    func.run = _outer.run = _outer
+    return func
So this doesn't actually change the unbound method, ergo the generated documentation stays the same. The second part of the trickery occurs at class initialisation.
 class ClientSikuliClass(ServerSikuliClass):
     """ Base class for types based on the Sikuli native types """
     ...
     def __init__(self, remote, server_id, *args, **kwargs):
         """
         :type server_id: int
         :type remote: SikuliClient
         """
         super(ClientSikuliClass, self).__init__(None)
+        for key in dir(self):
+            try:
+                func = getattr(self, key)
+            except AttributeError:
+                pass
+            else:
+                try:
+                    from functools import partial, wraps
+                    run = wraps(func.run)(partial(func.run, self))
+                    setattr(self, key, run)
+                except AttributeError:
+                    pass
         self.remote = remote
         self.server_id = server_id
So at the point where an instance of any class inheriting ClientSikuliClass is instantiated, an attempt is made to take the run property of each attribute of that instance and make that what is returned when the attribute is accessed, so the bound method is now a partially applied _outer function.
So the issues with this are multiple:
Using partial at initialisation results in losing the bound method information.
I worry about clobbering attributes that just so happen to have a run attribute...
So while I have an answer to my own question, I'm not quite satisfied by it.
Update
Ok so after a bit more work I ended up with this:
 class ClientSikuliClass(ServerSikuliClass):
     """ Base class for types based on the Sikuli native types """
     ...
     def __init__(self, remote, server_id, *args, **kwargs):
         """
         :type server_id: int
         :type remote: SikuliClient
         """
         super(ClientSikuliClass, self).__init__(None)
-        for key in dir(self):
+
+        def _apply_key(key):
             try:
                 func = getattr(self, key)
+                aug = func._augment
+                runner = func.run
             except AttributeError:
-                pass
-            else:
-                try:
-                    from functools import partial, wraps
-                    run = wraps(func.run)(partial(func.run, self))
-                    setattr(self, key, run)
-                except AttributeError:
-                    pass
+                return
+
+            @wraps(func)
+            def _outer(*args, **kwargs):
+                return runner(self, *args, **kwargs)
+
+            setattr(self, key, _outer)
+
+        for key in dir(self):
+            _apply_key(key)
+
         self.remote = remote
         self.server_id = server_id
This prevents the loss of the documentation on the object. You'll also see that the func._augment attribute is accessed even though the value is not used, so that if it does not exist an AttributeError is raised and the object attribute is left untouched.
I'd be interested in any comments on this.
functools.wraps only copies metadata such as __name__, __doc__, and __module__; it does not give the wrapper the wrapped function's signature. To preserve the signature as well, take a look at Michele Simionato's decorator module.
To expand my short comment on Ethan's answer, here is my original code using the functools package:
import functools

def trace(f):
    """The trace decorator."""
    logger = ...  # some code to determine the right logger
    where = ...   # some code to create a string describing where we are

    @functools.wraps(f)
    def _trace(*args, **kwargs):
        logger.debug("Entering %s", where)
        result = f(*args, **kwargs)
        logger.debug("Leaving %s", where)
        return result
    return _trace
and here is the code using the decorator package:
import decorator

def trace(f):
    """The trace decorator."""
    logger = ...  # some code to determine the right logger
    where = ...   # some code to create a string describing where we are

    def _trace(f, *args, **kwargs):
        logger.debug("Entering %s", where)
        result = f(*args, **kwargs)
        logger.debug("Leaving %s", where)
        return result
    return decorator.decorate(f, _trace)
We wanted to move the code to determine the right logger and where-string out of the actual function wrapper, for performance reasons. Hence the approach with the nested wrapper function, in both versions.
Both versions of the code work on Python 2 and Python 3, but the second version creates the correct prototypes for the decorated functions when using Sphinx & autodoc (without having to repeat the prototype in the autodoc statements, as suggested in this answer).
This is with CPython; I did not try Jython etc.
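To see the difference directly, here is a self-contained variant of the trace decorator above, with the logger and where placeholders filled in with plausible stand-ins (the stand-ins are illustrative, not the original project's code):

import inspect
import logging

import decorator

def trace(f):
    """The trace decorator, with the placeholders filled in."""
    logger = logging.getLogger(f.__module__)
    where = f.__qualname__

    def _trace(f, *args, **kwargs):
        logger.debug("Entering %s", where)
        result = f(*args, **kwargs)
        logger.debug("Leaving %s", where)
        return result
    return decorator.decorate(f, _trace)

@trace
def add(x, y=1):
    return x + y

print(inspect.signature(add))  # prints (x, y=1), not (*args, **kwargs)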
I'd appreciate some help with the following code, as I'm still relatively new to Python, and despite countless days trying to figure out where I'm going wrong, I can't seem to spot the error I'm making.
I've adapted the following code from an article on Medium to create a logging decorator, and then enhanced it to try to redact pandas DataFrames and dictionaries from the logs. Using functools caused me a problem with pytest and pytest fixtures; a post on Stack Overflow suggested dropping functools in favour of the decorator package.
import decorator

def log_decorator(_func=None):
    def log_decorator_info(func):
        def log_decorator_wrapper(*args, **kwargs):
            _logger = Logger()
            logger_obj = _logger.get_logger()
            args_passed_in_function = args_excl_df_dict(*args)
            kwargs_passed_in_function = kwargs_excl_df_dict(**kwargs)
            formatted_arguments = join_args_kwargs(args_passed_in_function, kwargs_passed_in_function)
            py_file_caller = getframeinfo(stack()[1][0])
            extra_args = {'func_name_override': func.__name__,
                          'file_name_override': os.path.basename(py_file_caller.filename)}
            """ Before the function execution, log function details."""
            logger_obj.info(f"Begin function - Arguments: {formatted_arguments}", extra=extra_args)
            try:
                """ log return value from the function """
                args_returned_from_function = args_excl_df_dict(func(*args))
                kwargs_returned_from_function = []
                formatted_arguments = join_args_kwargs(args_returned_from_function, kwargs_returned_from_function)
                logger_obj.info(f"End function - Returned: {formatted_arguments}", extra=extra_args)
            except:
                """log exception if occurs in function"""
                error_raised = str(sys.exc_info()[1])
                logger_obj.error(f"Exception: {str(sys.exc_info()[1])}", extra=extra_args)
                msg_to_send = f"{func.__name__} {error_raised}"
                send_alert(APP_NAME, msg_to_send, 'error')
                raise
            return func(*args, **kwargs)
        return decorator.decorator(log_decorator_wrapper, func)
    if _func is None:
        return log_decorator_info
    else:
        return log_decorator_info(_func)
Having adapted the above code, I can't figure out what is causing the following error:
args_returned_from_function = args_excl_df_dict(func(*args))
TypeError: test_me() takes 4 positional arguments but 5 were given
These are the other functions on which the log decorator relies:
import pandas as pd

def args_excl_df_dict(*args):
    args_list = []
    for a in args:
        if isinstance(a, (pd.DataFrame, dict)):
            a = 'redacted from log'
            args_list.append(repr(a))
        else:
            args_list.append(repr(a))
    return args_list

def kwargs_excl_df_dict(**kwargs):
    kwargs_list = []
    for k, v in kwargs.items():
        if isinstance(v, (dict, pd.DataFrame)):
            v = 'redacted from log'
            kwargs_list.append(f"{k}={v!r}")
        else:
            kwargs_list.append(f"{k}={v!r}")
    return kwargs_list

def join_args_kwargs(args, kwargs):
    formatted_arguments = ", ".join(args + kwargs)
    return str(formatted_arguments)
This is the code calling the decorator
@log_decorator.log_decorator()
def test_me(a, b, c, d):
    return a, b

test_me(string, number, dictionary, pandas_df)
I think the problem is that decorator.decorator calls the wrapper with the decorated function itself as the first positional argument, so the wrapper ends up forwarding the function as an extra argument to the function.
Try adding this line and see if it helps:
args = args[1:]
into your log_decorator_wrapper function towards the top, like this:
def log_decorator(_func=None):
    def log_decorator_info(func):
        def log_decorator_wrapper(*args, **kwargs):
            args = args[1:]  # <------------------- here
            _logger = Logger()
            logger_obj = _logger.get_logger()
            args_passed_in_function = args_excl_df_dict(*args)
            kwargs_passed_in_function = kwargs_excl_df_dict(**kwargs)
            formatted_arguments = join_args_kwargs(args_passed_in_function, kwargs_passed_in_function)
            py_file_caller = getframeinfo(stack()[1][0])
            extra_args = {'func_name_override': func.__name__,
                          'file_name_override': os.path.basename(py_file_caller.filename)}
            """ Before the function execution, log function details."""
            logger_obj.info(f"Begin function - Arguments: {formatted_arguments}", extra=extra_args)
            try:
                """ log return value from the function """
                args_returned_from_function = args_excl_df_dict(func(*args))
                kwargs_returned_from_function = []
                formatted_arguments = join_args_kwargs(args_returned_from_function, kwargs_returned_from_function)
                logger_obj.info(f"End function - Returned: {formatted_arguments}", extra=extra_args)
            except:
                """log exception if occurs in function"""
                error_raised = str(sys.exc_info()[1])
                logger_obj.error(f"Exception: {str(sys.exc_info()[1])}", extra=extra_args)
                msg_to_send = f"{func.__name__} {error_raised}"
                send_alert(APP_NAME, msg_to_send, 'error')
                raise
            return func(*args, **kwargs)
        return decorator.decorator(log_decorator_wrapper, func)
    if _func is None:
        return log_decorator_info
    else:
        return log_decorator_info(_func)
If your code in your editor is exactly as posted, also take a look at the indentation of the first three functions, and work your way down from there.
I have a get(bid, mid, pid) function. It is decorated with lru_cache. I want to drop all cache entries with bid == 105, for example.
I was thinking of closures that return decorated functions. Then I'd get a separate cache for each bid value, plus a non-cached function holding a dict of these closures that acts like a router. But maybe there is a more Pythonic way to do this?
Update: I came up with something like this, and it seems to work:
import functools

getters = {}

def facade(bid, mid, pid):
    global getters  # not very good, better to use a class
    if bid not in getters:
        def create_getter(bid):
            @functools.lru_cache(maxsize=None)
            def get(mid, pid):
                print('cache miss')
                return bid + mid + pid
            return get
        getters[bid] = create_getter(bid)
    return getters[bid](mid, pid)

val = facade(bid, mid, pid)  # ability to read like before
if need_to_drop:
    getters[bid].cache_clear()  # ability to flush entries with specified bid
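As the comment in the snippet says, the module-level global is better replaced by a class. A sketch of the same router as a class (the names are illustrative):

import functools

class Facade:
    def __init__(self):
        self._getters = {}

    def __call__(self, bid, mid, pid):
        # one lru_cache-decorated closure per bid value
        if bid not in self._getters:
            @functools.lru_cache(maxsize=None)
            def get(mid, pid, bid=bid):
                return bid + mid + pid
            self._getters[bid] = get
        return self._getters[bid](mid, pid)

    def drop(self, bid):
        # flush every cached entry for this bid only
        if bid in self._getters:
            self._getters[bid].cache_clear()

facade = Facade()
val = facade(105, 2, 3)   # reads like before
facade.drop(105)          # clears only the bid == 105 cache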
Maybe wrap functools.lru_cache and filter parameters?
from functools import lru_cache

def filtered_lru(filter_func: callable, maxsize: int):
    def wrapper(f):
        cached = lru_cache(maxsize=maxsize)(f)

        def wrapped(*args, **kwargs):
            if filter_func(*args, **kwargs):
                print('Using cache')
                return cached(*args, **kwargs)
            else:
                print('Not using cache')
                return f(*args, **kwargs)
        return wrapped
    return wrapper

def _get_filter(*args, **kwargs):
    return args[0] != 0

@filtered_lru(_get_filter, maxsize=100)
def get(num):
    print('Calculating...')
    return 2 * num

if __name__ == '__main__':
    print(get(1))
    print(get(1))
    print(get(1))
    print(get(0))
    print(get(0))
output:
Using cache
Calculating...
2
Using cache
2
Using cache
2
Not using cache
Calculating...
0
Not using cache
Calculating...
0
The below example is taken from Python Cookbook, 3rd edition, section 9.5.
I placed breakpoints at each line to understand the flow of execution. Below is the code sample, its output, and the questions I have. I have tried to explain my questions; let me know if you need further info.
from functools import wraps, partial
import logging

# Utility decorator to attach a function as an attribute of obj
def attach_wrapper(obj, func=None):
    if func is None:
        return partial(attach_wrapper, obj)
    setattr(obj, func.__name__, func)
    return func

def logged(level, name=None, message=None):
    def decorate(func):
        logname = name if name else func.__module__
        log = logging.getLogger(logname)
        logmsg = message if message else func.__name__

        @wraps(func)
        def wrapper(*args, **kwargs):
            log.log(level, logmsg)
            return func(*args, **kwargs)

        @attach_wrapper(wrapper)
        def set_message(newmsg):
            nonlocal logmsg
            logmsg = newmsg

        return wrapper
    return decorate

# Example use
@logged(logging.DEBUG)
def add(x, y):
    return x + y

logging.basicConfig(level=logging.DEBUG)

add.set_message('Add called')
# add.set_level(logging.WARNING)
print(add(2, 3))
output is
DEBUG:__main__:Add called
5
I understand the concept of decorators, but this one is a little confusing.
Scenario 1: When the line @logged(logging.DEBUG) is debugged, we get
decorate = <function logged.<locals>.decorate at 0x... >
Question: why would the control go back to execute the function def decorate? Is it because the decorate function is on the top of the stack?
Scenario 2: When executing @attach_wrapper(wrapper), the control goes to execute attach_wrapper(obj, func=None), and the partial call returns
func = <function logged.<locals>.decorate.<locals>.set_message at 0x... >
Question: why would the control go back to execute def attach_wrapper(obj, func=None), and how is the value <function logged.<locals>.decorate.<locals>.set_message at 0x... > passed to attach_wrapper as func this time?
Scenario 1
This:
@logged(logging.DEBUG)
def add(x, y):
    ....
is the same as this:
def add(x, y):
    ....

add = logged(logging.DEBUG)(add)
Note that there are two calls there: first logged(logging.DEBUG) returns decorate and then decorate(add) is called.
Scenario 2
Same as in Scenario 1, this:
@attach_wrapper(wrapper)
def set_message(newmsg):
    ...
is the same as this:
def set_message(newmsg):
    ...

set_message = attach_wrapper(wrapper)(set_message)
Again, there are two calls: first attach_wrapper(wrapper) returns a partial object, and then that partial object is called with set_message.
In other words...
logged and attach_wrapper are not decorators. They are functions which return decorators. That is why two calls are made: one to the function which returns the decorator, and another to the decorator itself.
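A stripped-down example of the same two-step shape may make it clearer (the names here are illustrative):

def repeat(times):                 # not a decorator itself: it returns one
    def decorate(func):            # the actual decorator
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorate

@repeat(3)   # repeat(3) returns decorate, then decorate(hello) returns wrapper
def hello():
    print("hello")

hello()  # prints "hello" three times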
I have the following code, in which I simply have a decorator for caching a function's results, and as a concrete implementation, I used the Fibonacci function.
After playing around with the code, I wanted to print the cache variable that's initialised in the cache decorator.
(It's not because I suspect the cache might be faulty; I simply want to know how to access it without going into debug mode and putting a breakpoint inside the decorator.)
I tried to explore the fib_w_cache function in debug mode, which is supposed to actually be the wrapper around the original fib_w_cache, but with no success.
import timeit

def cache(f, cache=dict()):
    def args_to_str(*args, **kwargs):
        return str(args) + str(kwargs)

    def wrapper(*args, **kwargs):
        args_str = args_to_str(*args, **kwargs)
        if args_str in cache:
            # print("cache used for: %s" % args_str)
            return cache[args_str]
        else:
            val = f(*args, **kwargs)
            cache[args_str] = val
            return val
    return wrapper

@cache
def fib_w_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache(n - 2) + fib_w_cache(n - 1)

def fib_wo_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_wo_cache(n - 1) + fib_wo_cache(n - 2)

print(timeit.timeit('[fib_wo_cache(i) for i in range(0,30)]', globals=globals(), number=1))
print(timeit.timeit('[fib_w_cache(i) for i in range(0,30)]', globals=globals(), number=1))
I admit this is not an "elegant" solution in a sense, but keep in mind that Python functions are also objects. So with a slight modification to your code, I managed to inject the cache as an attribute of each decorated function:
import timeit

def cache(f):
    def args_to_str(*args, **kwargs):
        return str(args) + str(kwargs)

    def wrapper(*args, **kwargs):
        args_str = args_to_str(*args, **kwargs)
        if args_str in wrapper._cache:
            # print("cache used for: %s" % args_str)
            return wrapper._cache[args_str]
        else:
            val = f(*args, **kwargs)
            wrapper._cache[args_str] = val
            return val
    wrapper._cache = {}
    return wrapper

@cache
def fib_w_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache(n - 2) + fib_w_cache(n - 1)

@cache
def fib_w_cache_1(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache_1(n - 2) + fib_w_cache_1(n - 1)

def fib_wo_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_wo_cache(n - 1) + fib_wo_cache(n - 2)

print(timeit.timeit('[fib_wo_cache(i) for i in range(0,30)]', globals=globals(), number=1))
print(timeit.timeit('[fib_w_cache(i) for i in range(0,30)]', globals=globals(), number=1))
print(fib_w_cache._cache)
print(fib_w_cache_1._cache)  # to prove that caches are different instances for different functions
cache is of course a perfectly normal local variable in scope within the cache function, and a perfectly normal nonlocal cellvar in scope within the wrapper function, so if you want to access the value from there, you just do it, as you already are.
But what if you wanted to access it from somewhere else? Then there are two options.
First, cache happens to be defined at the global level, meaning any code anywhere (that hasn't hidden it with a local variable named cache) can access the function object.
And if you're trying to access the values of a function's default parameters from outside the function, they're available in the attributes of the function object. The inspect module docs explain the inspection-oriented attributes of each builtin type:
__defaults__ is a sequence of the values for all positional-or-keyword parameters, in order.
__kwdefaults__ is a mapping from keywords to values for all keyword-only parameters.
So:
>>> def f(a, b=0, c=1, *, d=2, e=3): pass
>>> f.__defaults__
(0, 1)
>>> f.__kwdefaults__
{'e': 3, 'd': 2}
So, for a simple case where you know there's exactly one default value and know which argument it belongs to, all you need is:
>>> cache.__defaults__[0]
{}
If you need to do something more complicated or dynamic, like get the default value for c in the f function above, you need to dig into other information: the only way to know that c's default value will be the second one in __defaults__ is to look at the attributes of the function's code object, like f.__code__.co_varnames, and figure it out from there. But usually, it's better to just use the inspect module's helpers. For example:
>>> inspect.signature(f).parameters['c'].default
1
>>> inspect.signature(cache).parameters['cache'].default
{}
Alternatively, if you're trying to access the cache from inside fib_w_cache: there's no variable in lexical scope in that function body you can look at, but you do know that the function body is only ever called from the decorator's wrapper, where the variable is available.
So, you can get your stack frame:
frame = inspect.currentframe()
… follow it back to your caller:
back = frame.f_back
… and grab it from that frame's locals:
back.f_locals['cache']
It's worth noting that f_locals works like the locals function: it's actually a copy of the internal locals storage, so modifying it may have no effect, and that copy flattens nonlocal cell variables to regular local variables. If you wanted to access the actual cell variable, you'd have to grub around in things like back.f_code.co_freevars to get the index and then dig it out of the function object's __closure__. But usually, you don't care about that.
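For completeness, here is what that cell-digging looks like for the cache decorator from the question; the index is looked up via co_freevars rather than hard-coded:

idx = fib_w_cache.__code__.co_freevars.index('cache')
cache_dict = fib_w_cache.__closure__[idx].cell_contents
print(cache_dict is cache.__defaults__[0])  # True: the very same dict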
Just for the sake of completeness, Python has a caching decorator built in, functools.lru_cache, which comes with some inspection mechanisms:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_w_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache(n - 2) + fib_w_cache(n - 1)

print('fib_w_cache(10) =', fib_w_cache(10))
print(fib_w_cache.cache_info())
Prints:
fib_w_cache(10) = 55
CacheInfo(hits=8, misses=11, maxsize=None, currsize=11)
I managed to find a solution (in some sense thanks to @Patrick Haugh's advice).
I simply accessed cache.__defaults__[0], which holds the cache's dict.
The insights about the shared cache and how to avoid it were also quite useful.
Just as a note, the cache dictionary can only be accessed through the cache function object; it cannot be accessed through the decorated functions (at least as far as I understand). This aligns with the fact that the cache is shared in my implementation, whereas in the alternative implementation that was proposed, it is local to each decorated function.
You can make a class into a wrapper.
def args_to_str(*args, **kwargs):
    return str(args) + str(kwargs)

class Cache(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args, **kwargs):
        args_str = args_to_str(*args, **kwargs)
        if args_str in self.cache:
            return self.cache[args_str]
        else:
            val = self.func(*args, **kwargs)
            self.cache[args_str] = val
            return val
Each function has its own cache; you can access it as function.cache. This approach also lets you attach any other methods you need to your function.
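Usage is the same as with a function-based decorator, and the cache is then one attribute lookup away:

@Cache
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)
print(fib.cache)  # this function's own cache dict, keyed by stringified args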
If you wanted all decorated functions to share the same cache, you could use a class variable instead of an instance variable:
class SharedCache(object):
    cache = {}

    def __init__(self, func):
        self.func = func
    # rest of the code is the same

@SharedCache
def function_1(stuff):
    things
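Assuming __call__ is filled in exactly as in Cache above, every SharedCache-decorated function reads and writes the single class-level dict:

@SharedCache
def double(x):
    return 2 * x

@SharedCache
def triple(x):
    return 3 * x

double(4)
triple(5)
print(SharedCache.cache)  # entries from both functions land in the same dict

One caveat of this sketch: because the key is built only from the arguments, two different functions called with the same arguments will collide in the shared dict.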
I tried to look for a similar question without luck. I'm quite new to Python, so, please, be nice :)
I have my class, but I wanted to log when functions are executed and with which parameters, so I wrote my decorators.
At the moment I have everything in a single script, which looks more or less like:
import...
decorators...
my class...
Honestly, I don't like the decorators hanging outside of my class; I have a function to initialize the log level, and a function which is never used as a decorator itself but is used by the other decorators. [Code at the end of the question]
Should I put my decorators in a decorator.py file and import it in my class script? Should I leave them like that and learn to love this kind of file structure?
import logging

import logzero
from logzero import logger

def initialize_log(db):
    logzero.loglevel(logging.INFO)
    logzero.logfile("sw-" + db + ".log")

def _log(log_function, f, *args, **kwargs):
    arguments = ""
    if len(args) > 1:
        arguments = " ({})".format(','.join(map(str, args[1:])))
    kwarguments = ""
    if len(kwargs) > 0:
        kwarguments = " ({})".format(','.join([str(k) + "=" + str(kwargs[k]) for k in kwargs]))
    log_function(f.__name__ + " started" + arguments + kwarguments)
    res = f(*args, **kwargs)
    log_function(f.__name__ + " completed" + arguments + kwarguments)
    return res

def log_info(f):
    def _decorator(*args, **kwargs):
        return _log(logger.info, f, *args, **kwargs)
    return _decorator

def log_debug(f):
    def _decorator(*args, **kwargs):
        return _log(logger.debug, f, *args, **kwargs)
    return _decorator
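For what it's worth, the decorator.py split the question is weighing would look something like this (the file and class names here are illustrative):

# decorator.py -- the code above moves here unchanged

# my_class.py
from decorator import initialize_log, log_info, log_debug

class MyClass(object):
    @log_info
    def do_work(self, x, y):
        return x + y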