Python decorator variable access - python

I have a decorator function my_fun(I, k) and it is applied to a function add(x, y) like this:

@my_fun(4, 5)
def add(x, y): return x + y

I am new to Python and would like to know, given that I am the one writing the my_fun function:

How can I access x and y from the add function inside my_fun?
How can I access the return value of add in the decorator function?

I am a little confused about the syntax and the concepts; any explanation would help.

A decorator consists of the decorator function and a function wrapper (and, if you want additional arguments for the decorator, another outer layer of function around it):

# Takes the arguments for the decorator and makes them accessible inside
def my_fun(decorator_argument1, decorator_argument2):
    # Takes the function so that it can be wrapped.
    def wrapfunc(func):
        # Here we are actually going to wrap the function ... finally
        def wrapper(*args, **kwargs):
            # Call the function with the args and kwargs
            res = func(*args, **kwargs)
            # return this result
            return res
        # Replace the decorated function with the wrapper
        return wrapper
    # Return the wrapper for the function wrapper :-)
    return wrapfunc
In your case, if you only want to use the decorator with your function, you don't need to bother with *args, **kwargs and can replace the wrapper by:

def wrapper(x, y):
    # Here you can do stuff with x and y, i.e. print(x)
    # Call the function with x and y
    res = func(x, y)
    # Here you can do stuff with the result, i.e. res = res * decorator_argument1
    return res
I indicated the places where you can access x and y and the result.
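Putting the answer together, here is a runnable sketch of the whole thing. The print calls are just one illustrative way to use those access points; the decorator arguments 4 and 5 are also visible inside the wrapper, even though this sketch doesn't use them:

```python
import functools

def my_fun(decorator_argument1, decorator_argument2):
    def wrapfunc(func):
        @functools.wraps(func)  # keep add's name and docstring on the wrapper
        def wrapper(x, y):
            # x and y are accessible here, before the call ...
            print("calling {} with x={}, y={}".format(func.__name__, x, y))
            res = func(x, y)
            # ... and the return value is accessible here, after the call
            # (decorator_argument1 and decorator_argument2 are in scope as well)
            print("{} returned {}".format(func.__name__, res))
            return res
        return wrapper
    return wrapfunc

@my_fun(4, 5)
def add(x, y):
    return x + y

print(add(1, 2))  # prints the two debug lines, then 3
```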
If you want to predefine values for x and y, a custom decorator is not the best way. You could use default arguments:

def add(x=4, y=5): return x + y

add()       # returns 9
add(2)      # returns 7
add(5, 10)  # returns 15

or, if you want to fix an argument, you should use functools.partial.
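A quick sketch of the functools.partial route (the name add_to_4 is just made up for illustration):

```python
from functools import partial

def add(x, y):
    return x + y

add_to_4 = partial(add, 4)       # fixes the first argument: x = 4
print(add_to_4(5))               # 9
print(partial(add, y=10)(5))     # 15, with y fixed as a keyword argument
```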

If you're passing arguments to the decorator with @my_fun(4, 5), you need three levels of nested functions to implement the decorator in the simplest way. The outer level is the "decorator factory". It returns the middle level function, the decorator. The decorator gets called with the function it's decorating as an argument and needs to return the innermost nested function, the wrapper. The wrapper function is the one that gets called by the user.
def decorator_factory(deco_arg, deco_arg2):  # name this whatever you want to use with @ syntax
    def decorator(func):
        def wrapper(func_arg, func_arg2):
            # This is a closure!
            # In here you can write code using the arguments from the enclosing scopes, e.g.:
            return func(func_arg*deco_arg, func_arg2*deco_arg2)  # uses args from all levels
        return wrapper
    return decorator
The inner functions here are closures. They can see the variables in the scopes surrounding the places where they were defined, even after the functions those scopes belonged to have finished running.
(Note, if you want your decorator to be able to decorate many different functions, you may want the wrapper function to accept *args and **kwargs and pass them along to func. The example above only works for functions that accept exactly two arguments. A limitation like that may be perfectly reasonable for some uses, but not always.)
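A sketch of that general-purpose variant, forwarding *args and **kwargs (with functools.wraps added to preserve the decorated function's metadata; the multiplication from the earlier example is dropped so the wrapper works for any signature):

```python
import functools

def decorator_factory(deco_arg, deco_arg2):
    def decorator(func):
        @functools.wraps(func)           # keep func.__name__, __doc__, ...
        def wrapper(*args, **kwargs):
            # deco_arg, deco_arg2, func, args and kwargs are all visible here
            return func(*args, **kwargs)
        return wrapper
    return decorator

@decorator_factory(4, 5)
def add(x, y):
    return x + y

print(add(2, 3))        # 5 -- the wrapper now works for any function
print(add.__name__)     # 'add', thanks to functools.wraps
```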

Related

Why was Python decorator chaining designed to work backwards? What is the logic behind this order?

To start with, my question here is about the semantics and the logic behind why the Python language was designed like this in the case of chained decorators. Please notice the nuance of how this is different from the question "How does decorator chaining work?".
It seems quite a number of other users have had the same doubts about the call order of chained Python decorators. It is not like I can't add a __call__ and see the order for myself. I get this; my point is, why was it designed to start from the bottom when it comes to chained Python decorators?
E.g.
def first_func(func):
    def inner():
        x = func()
        return x * x
    return inner

def second_func(func):
    def inner():
        x = func()
        return 2 * x
    return inner

@first_func
@second_func
def num():
    return 10

print(num())
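Tracing that example makes the order concrete: the decorator closest to the def is applied first, so num ends up as first_func(second_func(num)), and the call num() therefore doubles first and squares second:

```python
def first_func(func):
    def inner():
        x = func()
        return x * x
    return inner

def second_func(func):
    def inner():
        x = func()
        return 2 * x
    return inner

@first_func
@second_func
def num():
    return 10

# second_func's inner runs the original num:  2 * 10 = 20
# first_func's inner then squares that:      20 * 20 = 400
print(num())  # 400
```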
Quoting the documentation on decorators:
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:
def f(arg):
    ...
f = staticmethod(f)

@staticmethod
def f(arg):
    ...
From this it follows that the decoration in
@a
@b
@c
def fun():
    ...
is equivalent to
fun = a(b(c(fun)))
IOW, it was designed like that because it's just syntactic sugar.
For proof, let's just decorate an existing function and not return a new one:
def dec1(f):
    print(f"dec1: got {vars(f)}")
    f.dec1 = True
    return f

def dec2(f):
    print(f"dec2: got {vars(f)}")
    f.dec2 = True
    return f

@dec1
@dec2
def foo():
    pass

print(f"Fully decked out: {vars(foo)}")
prints out
dec2: got {}
dec1: got {'dec2': True}
Fully decked out: {'dec2': True, 'dec1': True}
TL;DR
g(f(x)) means applying f to x first, then applying g to the output.
Omit the parentheses, add @ before and a line break after each function name:

@g
@f
x

(This syntax is only valid if x is the definition of a function/class.)
Abstract explanation
The reasoning behind this design decision becomes fairly obvious IMHO, if you remember what the decorator syntax - in its most abstract and general form - actually means. So I am going to try the abstract approach to explain this.
It is all about syntax
To be clear here, the distinguishing factor in the concept of the "decorator" is not the object underneath it (so to speak) nor the operation it performs. It is the special syntax and the restrictions for it. Thus, a decorator at its core is nothing more than feature of Python grammar.
The decorator syntax requires a target to be decorated. Initially (see PEP 318) the target could only be function definitions; later class definitions were also allowed to be decorated (see PEP 3129).
Minimal valid syntax
Syntactically, this is valid Python:
def f(): pass

@f
class Target: pass  # or `def target(): pass`
However, this will (perhaps unsurprisingly) cause a TypeError upon execution. As has been reiterated multiple times here and in other posts on this platform, the above is equivalent to this:
def f(): pass
class Target: pass
Target = f(Target)
Minimal working decorator
The TypeError stems from the fact that f lacks a positional argument. This is the obvious logical restriction imposed by what a decorator is supposed to do. Thus, to achieve not only syntactically valid code, but also have it run without errors, this is sufficient:
def f(x): pass

@f
class Target: pass
This is still not very useful, but it is enough for the most general form of a working decorator.
Decoration is just application of a function to the target and assigning the output to the target's name.
Chaining functions ⇒ Chaining decorators
We can ignore the target and what it is or does and focus only on the decorator. Since it merely stands for applying a function, the order of operations comes into play, as soon as we have more than one. What is the order of operation, when we chain functions?
def f(x): pass
def g(x): pass
class Target: pass
Target = g(f(Target))
Well, just like in the composition of purely mathematical functions, this implies that we apply f to Target first and then apply g to the result of f. Despite g appearing first (i.e. further left), it is not what is applied first.
Since stacking decorators is equivalent to nesting functions, it seems obvious to define the order of operation the same way. This time, we just skip the parentheses, add an # symbol in front of the function name and a line break after it.
def f(x): pass
def g(x): pass

@g
@f
class Target: pass
But, why though?
If after the explanation above (and reading the PEPs for historic background), the reasoning behind the order of operation is still not clear or still unintuitive, there is not really any good answer left, other than "because the devs thought it made sense, so get used to it".
PS
I thought I'd add a few things for additional context based on all the comments around your question.
Decoration vs. calling a decorated function
A source of confusion seems to be the distinction between what happens when applying the decorator versus calling the decorated function.
Notice that in my examples above I never actually called target itself (the class or function being decorated). Decoration is itself a function call. Adding @f above the target is calling f and passing the target to it as the first positional argument.
A "decorated function" might not even be a function
The distinction is very important because nowhere does it say that a decorator actually needs to return a callable (function or class). f being just a function means it can return whatever it wants. This is again valid and working Python code:
def f(x): return 3.14

@f
def target(): return "foo"

try:
    target()
except Exception as e:
    print(repr(e))

print(target)
Output:
TypeError("'float' object is not callable")
3.14
Notice that the name target does not even refer to a function anymore. It just holds the 3.14 returned by the decorator. Thus, we cannot even call target. The entire function behind it is essentially lost immediately before it is even available to the global namespace. That is because f just completely ignores its first positional argument x.
Replacing a function
Expanding this further, if we want, we can have f return a function. Not doing that seems very strange, considering it is used to decorate a function. But it doesn't have to be related to the target at all. Again, this is fine:
def bar(): return "bar"
def f(x): return bar

@f
def target(): return "foo"

print(target())
print(target is bar)
Output:
bar
True
It comes down to convention
The way decorators are actually overwhelmingly used out in the wild, is in a way that still keeps a reference to the target being decorated around somewhere. In practice it can be as simple as this:
def f(x):
    print(f"applied `f({x.__name__})`")
    return

@f
def target(): return "foo"
Just running this piece of code outputs applied f(target). Again, notice that we don't call target here, we only called f. But now, the decorated function is still target, so we could add the call print(target()) at the bottom and that would output foo after the other output produced by f.
The fact that most decorators don't just throw away their target comes down to convention. You (as a developer) would not expect your function/class to simply be thrown away completely, when you use a decorator.
Decoration with wrapping
This is why real-life decorators typically either return the reference to the target at the end outright (like in the last example) or they return a different callable, but that callable itself calls the target, meaning a reference to the target is kept in that new callable's local namespace. These functions are what is usually referred to as wrappers:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper():
        print(f"wrapper executing with {locals()=}")
        return x()
    return wrapper

@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
wrapper executing with locals()={'x': <function target at 0x7f1b2f78f250>}
target()='foo'
target.__name__='wrapper'
As you can see, what the decorator left us is wrapper, not what we originally defined as target. And the wrapper is what we call, when we write target().
Wrapping wrappers
This is the kind of behavior we typically expect when we use decorators. And therefore it is not surprising that multiple decorators stacked together behave the way they do. They are called from the inside out (as explained above), and each adds its own wrapper around what it receives from the one applied before:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper_from_f():
        print(f"wrapper_from_f executing with {locals()=}")
        return x()
    return wrapper_from_f

def g(x):
    print(f"applied `g({x.__name__})`")
    def wrapper_from_g():
        print(f"wrapper_from_g executing with {locals()=}")
        return x()
    return wrapper_from_g

@g
@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
applied `g(wrapper_from_f)`
wrapper_from_g executing with locals()={'x': <function f.<locals>.wrapper_from_f at 0x7fbfc8d64f70>}
wrapper_from_f executing with locals()={'x': <function target at 0x7fbfc8d65630>}
target()='foo'
target.__name__='wrapper_from_g'
This shows very clearly the difference between the order in which the decorators are called and the order in which the wrapped/wrapping functions are called.
After the decoration is done, we are left with wrapper_from_g, which is referenced by our target name in global namespace. When we call it, wrapper_from_g executes and calls wrapper_from_f, which in turn calls the original target.
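As a side note, the target.__name__ == 'wrapper_from_g' result above is exactly the problem functools.wraps solves. A sketch where each wrapper is itself decorated with functools.wraps(x) keeps the original metadata through both layers while still routing calls through both wrappers:

```python
import functools

def f(x):
    @functools.wraps(x)          # copies x.__name__ etc. onto the wrapper
    def wrapper_from_f():
        return x()
    return wrapper_from_f

def g(x):
    @functools.wraps(x)
    def wrapper_from_g():
        return x()
    return wrapper_from_g

@g
@f
def target(): return "foo"

print(target())         # 'foo', still routed through both wrappers
print(target.__name__)  # 'target', not 'wrapper_from_g'
```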

Does the modified function in a Python decorator have to return a value?

I'm trying to understand the behavior of decorator.
I understand that a decorator has to return an object so I can understand the syntax below:
def my_deco(fonction):
    print("Deco is called with parameter the function {0}".format(fonction))
    return fonction

@my_deco
def hello():
    print("hello !")

Deco is called with parameter the function <function hello at 0x00BA5198>
Here the decorator does not do much, but in the case I need to modify the function, I'd define a decorator like this
def my_deco(fonction):
    def modified_func():
        print("Warning ! calling {0}".format(fonction))
        return fonction()
    return modified_func

@my_deco
def hello():
    print("Salut !")
The initial function's behavior is modified through modified_func. This is fine.
It includes the call to the initial function. This is fine
Now what I don't understand is: why do we return the result of the function? In my case the function is a simple print, so I don't see why I should return something.
Thanks for your explanation
As it is in the comments: usually when you write a decorator, you make it so that it can be used with any possible function. And the way to do that is to return either whatever the original function returned, or transform that return value (which can also be done in the wrapper function).
In Python, all functions actually do return something. Functions without an explicit return statement return the value None. So, if your wrapper function, inside the decorator, always returns whatever the decorated function returned, you will be on the safe side: even if the decorated function had no explicit return, it will return a None that is just forwarded by your wrapper.
Now, that is not "mandatory". If you know beforehand that your decorator will only be applied to functions with no return value, you are free not to put a return statement in the wrapper function as well - it is not an incorrect syntax (but it is likely a trap for your future self).
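A small sketch of that "safe side" behaviour; the answer function is made up here to show the contrast with a function that has no explicit return:

```python
def my_deco(fonction):
    def modified_func():
        print("Warning ! calling {0}".format(fonction))
        return fonction()        # forward whatever the function returns
    return modified_func

@my_deco
def hello():
    print("Salut !")             # no explicit return, so it returns None

@my_deco
def answer():
    return 42

print(hello())   # the warning, then "Salut !", then None
print(answer())  # the warning, then 42
```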

mechanism behind decoration in python

Below is an example of decorator in python. I don't quite get how it actually works for the doubly decorated decorator.
from functools import update_wrapper

def decorator(d):
    print(d.__name__)
    return lambda fn: update_wrapper(d(fn), fn)

decorator = decorator(decorator)  # I don't understand how this works.

@decorator
def n_ary(f):
    print(f.__name__)
    def n_ary_f(x, *args):
        return x if not args else f(x, n_ary_f(*args))
    return n_ary_f

@n_ary
def seq(x, y): return ('seq', x, y)
It seems that the flow should be (I am not sure about it):
decorator is decorated, so it returns lambda fn: update_wrapper(decorator(fn),fn).
n_ary=decorator(n_ary), then n_ary is now updated due to the function of update_wrapper(decorator(n_ary),n_ary)
The third part should be the update of seq, but I don't understand when is the update_wrapper function used.
Decoration is just syntactic sugar for calling another function, and replacing the current function object with the result. The decorator dance you are trying to understand is over-using that fact. Even though it tries to make it easier to produce decorators, I find it doesn't actually add anything and is only creating confusion by not following standard practice.
To understand what is going on, you can substitute the function calls (including decorators being applied) with their return values, and tracking the d references by imagining saved references to the original decorated function object:
decorator = decorator(decorator) replaces the original decorator function with the result of calling it on itself. We'll just ignore the print() call here to make the substitution easier.
The decorator(decorator) call returns lambda fn: update_wrapper(d(fn), fn), where d is bound to the original decorator, so now we have:

_saved_reference_to_decorator = decorator
decorator = lambda fn: update_wrapper(_saved_reference_to_decorator(fn), fn)

so update_wrapper() is not actually called yet. It'll only be called when this new decorator lambda is called.
@decorator then calls the above lambda (the one calling _saved_reference_to_decorator(fn) and passing the result to update_wrapper()) and applies that lambda to the def n_ary(f) function:
n_ary = decorator(n_ary)
which expands to:
n_ary = update_wrapper(_saved_reference_to_decorator(n_ary), n_ary)
which is:
_saved_reference_to_n_ary = n_ary
n_ary = update_wrapper(lambda fn: update_wrapper(_saved_reference_to_n_ary(fn), fn), n_ary)
Now, update_wrapper() just copies metadata from the second argument to the first returning the first argument, so that then leaves:
n_ary = lambda fn: update_wrapper(_saved_reference_to_n_ary(fn), fn)
with the right __name__ and such set on the lambda function object.
@n_ary is again a decorator being applied, this time to def seq(x, y), so we get:
seq = n_ary(seq)
which can be expanded to:
seq = update_wrapper(_saved_reference_to_n_ary(seq), seq)
which if we take the return value of update_wrapper() is
seq = _saved_reference_to_n_ary(seq)
with the metadata copied over from the original seq to whatever the original n_ary function returns.
So in the end, all this dance gets you is update_wrapper() being applied to the return value from a decorator, which is the contained wrapper function.
This is all way, way too complicated. The update_wrapper() function has a far more readable helper decorator already provided: @functools.wraps(). Your piece of code could be rewritten to:
import functools

def n_ary(f):
    print(f.__name__)
    @functools.wraps(f)
    def n_ary_f(x, *args):
        return x if not args else f(x, n_ary_f(*args))
    return n_ary_f

@n_ary
def seq(x, y): return ('seq', x, y)
I simply replaced the @decorator decorator on the n_ary() function definition with a @functools.wraps() decorator on the contained wrapper function that is returned.

Apply different decorators based on a condition

I'm using unittest and nose-parameterized, and want to apply different decorators to a test based on a condition.
I have a test, and I want to either skip it with unittest.skip or execute it with @parameterized.expand(args), based on the arguments passed in args.
I think I need another decorator which applies the proper decorator to the test, but I'm not sure how.
Pseudo code could be something like this:

@validate_data(args)
def test(args):
    ...

where @validate_data(args) is a decorator which applies unittest.skip if args == None, or @parameterized.expand(args) otherwise.
Any comments/suggestions are appreciated.
A decorator can also be called as a function. @decorator is equivalent to decorator(func), and @decorator(args) to decorator(args)(func). So you can return the appropriate decorator conditionally from your own decorator factory. Here is an example:
def parameterized_or_skip(args=None):
    if args:
        return parameterized.expand(args)
    return unittest.skip(reason='No args')

...

@parameterized_or_skip(args)
def my_testcase(self, a, b):
    pass
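To see the dispatch mechanism without needing unittest or parameterized installed, here is a self-contained sketch with two stand-in decorators (double_result and skip are invented names for illustration, not real library APIs):

```python
def double_result(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs) * 2
    return wrapper

def skip(func):
    def wrapper(*args, **kwargs):
        return None              # stand-in for a skipped test
    return wrapper

def double_or_skip(args=None):
    # The condition picks which decorator to hand back
    if args:
        return double_result
    return skip

@double_or_skip(args=[1, 2])
def my_testcase():
    return 10

print(my_testcase())   # 20, because args was truthy

@double_or_skip()
def skipped_case():
    return 10

print(skipped_case())  # None, because no args were given
```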

Python function loses identity after being decorated

(Python 3)
First of all, I feel my title isn't quite what it should be, so if you stick through the question and come up with a better title, please feel free to edit it.
I have recently learned about Python Decorators and Python Annotations, and so I wrote two little functions to test what I have recently learned.
One of them, called wraps, is supposed to mimic the behaviour of functools.wraps, while the other, called ensure_types, is supposed to check, for a given function and through its annotations, whether the arguments passed to it have the correct types.
This is the code I have for those functions:
def wraps(original_func):
    """Update the decorated function with some important attributes from the
    one that was decorated so as not to lose good information"""
    def update_attrs(new_func):
        # Update the __annotations__
        for key, value in original_func.__annotations__.items():
            new_func.__annotations__[key] = value
        # Update the __dict__
        for key, value in original_func.__dict__.items():
            new_func.__dict__[key] = value
        # Copy the __name__
        new_func.__name__ = original_func.__name__
        # Copy the docstring (__doc__)
        new_func.__doc__ = original_func.__doc__
        return new_func
    return update_attrs  # return the decorator
def ensure_types(f):
    """Uses f.__annotations__ to check the expected types for the function's
    arguments. Raises a TypeError if there is no match.
    If an argument has no annotation, object is returned and so, regardless of
    the argument passed, isinstance(arg, object) evaluates to True"""
    @wraps(f)  # say that test_types is wrapping f
    def test_types(*args, **kwargs):
        # Loop through the positional args, get their name and check the type
        for i in range(len(args)):
            # function.__code__.co_varnames is a tuple with the names of the
            # arguments in the order they are in the function def statement
            var_name = f.__code__.co_varnames[i]
            if not isinstance(args[i], f.__annotations__.get(var_name, object)):
                raise TypeError("Bad type for function argument named '{}'".format(var_name))
        # Loop through the named args, get their value and check the type
        for key in kwargs.keys():
            if not isinstance(kwargs[key], f.__annotations__.get(key, object)):
                raise TypeError("Bad type for function argument named '{}'".format(key))
        return f(*args, **kwargs)
    return test_types
Supposedly, everything is alright until now. Both the wraps and the ensure_types are supposed to be used as decorators. The problem comes when I defined a third decorator, debug_dec that is supposed to print to the console when a function is called and its arguments. The function:
def debug_dec(f):
    """Does some annoying printing for debugging purposes"""
    @wraps(f)
    def profiler(*args, **kwargs):
        print("{} function called:".format(f.__name__))
        print("\tArgs: {}".format(args))
        print("\tKwargs: {}".format(kwargs))
        return f(*args, **kwargs)
    return profiler
That also works cooly. The problem comes when I try to use debug_dec and ensure_types at the same time.
@ensure_types
@debug_dec
def testing(x: str, y: str = "lol"):
    print(x)
    print(y)

testing("hahaha", 3)  # raises no TypeError, even though y is not a str
But if I change the order with which the decorators are called, it works just fine.
Can someone please help me understand what is going wrong, and if is there any way of solving the problem besides swapping those two lines?
EDIT
If I add the lines:
print(testing.__annotations__)
print(testing.__code__.co_varnames)
The output is as follows:
#{'y': <class 'str'>, 'x': <class 'str'>}
#('args', 'kwargs', 'i', 'var_name', 'key')
Although wraps maintains the annotations, it doesn't maintain the function signature. You see this when you print out the co_varnames. Since ensure_types does its checking by comparing the names of the arguments with the names in the annotation dict, it fails to match them up, because the wrapped function has no arguments named x and y (it just accepts generic *args and **kwargs).
You could try using the decorator module, which lets you write decorators that act like functools.wraps but also preserve the function signature (including annotations).
There is probably also a way to make it work "manually", but it would be a bit of a pain. Basically what you would have to do is have wraps store the original functions argspec (the names of its arguments), then have ensure_dict use this stored argspec instead of the wrapper's argspec in checking the types. Essentially your decorators would pass the argspec in parallel with the wrapped functions. However, using decorator is probably easier.
