Below is an example of a decorator in Python. I don't quite get how it actually works for the doubly decorated decorator.
from functools import update_wrapper

def decorator(d):
    print(d.__name__)
    return lambda fn: update_wrapper(d(fn), fn)
decorator = decorator(decorator)  # I don't understand how this works.

@decorator
def n_ary(f):
    print(f.__name__)
    def n_ary_f(x, *args):
        return x if not args else f(x, n_ary_f(*args))
    return n_ary_f

@n_ary
def seq(x, y): return ('seq', x, y)
It seems that the flow should be (I am not sure about it):
decorator is decorated, so it returns lambda fn: update_wrapper(decorator(fn), fn).
n_ary = decorator(n_ary), so n_ary is now updated via update_wrapper(decorator(n_ary), n_ary).
The third part should be the update of seq, but I don't understand when the update_wrapper function is used.
Decoration is just syntactic sugar for calling another function, and replacing the current function object with the result. The decorator dance you are trying to understand is over-using that fact. Even though it tries to make it easier to produce decorators, I find it doesn't actually add anything and is only creating confusion by not following standard practice.
To understand what is going on, you can substitute the function calls (including decorators being applied) with their return values, and tracking the d references by imagining saved references to the original decorated function object:
decorator = decorator(decorator) replaces the original decorator function with the result of calling it on itself. We'll just ignore the print() call here to make substitution easier.
The decorator(decorator) call returns lambda fn: update_wrapper(d(fn), fn), where d is bound to the original decorator, so now we have:

_saved_reference_to_decorator = decorator
decorator = lambda fn: update_wrapper(_saved_reference_to_decorator(fn), fn)
so update_wrapper() is not actually called yet. It'll only be called when this new decorator lambda is called.
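A small runnable sketch (with d standing in for the saved reference, and a throwaway target function for illustration) shows that laziness:

```python
from functools import update_wrapper

calls = []

def d(fn):
    calls.append("d called")
    return fn

# Defining the lambda runs nothing: neither d nor update_wrapper executes yet.
deco = lambda fn: update_wrapper(d(fn), fn)
print(calls)  # []

def target(): pass
deco(target)  # only now does the lambda body run
print(calls)  # ['d called']
```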
@decorator then applies the above lambda (the one calling _saved_reference_to_decorator(fn) and passing the result to update_wrapper()) to the def n_ary(f) function:
n_ary = decorator(n_ary)
which expands to:
n_ary = update_wrapper(_saved_reference_to_decorator(n_ary), n_ary)
which is:
_saved_reference_to_n_ary = n_ary
n_ary = update_wrapper(lambda fn: update_wrapper(_saved_reference_to_n_ary(fn), fn), n_ary)
Now, update_wrapper() just copies metadata from the second argument to the first, returning the first argument, so that then leaves:
n_ary = lambda fn: update_wrapper(_saved_reference_to_n_ary(fn), fn)
with the right __name__ and such set on the lambda function object.
@n_ary is again a decorator being applied, this time to def seq(x, y), so we get:
seq = n_ary(seq)
which can be expanded to:
seq = update_wrapper(_saved_reference_to_n_ary(seq), seq)
which, if we take the return value of update_wrapper(), is
seq = _saved_reference_to_n_ary(seq)
with the metadata copied over from the original seq to whatever the original n_ary function returns.
So in the end, all this dance gets you is update_wrapper() being applied to the return value from a decorator, which is the contained wrapper function.
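To verify that conclusion, here is a minimal runnable version of the question's code with the print() calls removed; the wrapper produced by n_ary ends up carrying seq's metadata while still folding its arguments:

```python
from functools import update_wrapper

def decorator(d):
    return lambda fn: update_wrapper(d(fn), fn)
decorator = decorator(decorator)

@decorator
def n_ary(f):
    def n_ary_f(x, *args):
        # Right-fold the arguments through the binary function f
        return x if not args else f(x, n_ary_f(*args))
    return n_ary_f

@n_ary
def seq(x, y): return ('seq', x, y)

print(seq.__name__)  # 'seq' — copied over by update_wrapper
print(seq(1, 2, 3))  # ('seq', 1, ('seq', 2, 3))
```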
This is all way, way too complicated. The update_wrapper() function has a far more readable helper decorator already provided: functools.wraps(). Your piece of code could be rewritten to:
import functools

def n_ary(f):
    print(f.__name__)
    @functools.wraps(f)
    def n_ary_f(x, *args):
        return x if not args else f(x, n_ary_f(*args))
    return n_ary_f

@n_ary
def seq(x, y): return ('seq', x, y)
I simply replaced the @decorator decorator on the n_ary() function definition with a @functools.wraps() decorator on the contained wrapper function that is returned.
To start with, my question here is about the semantics and the logic behind why the Python language was designed like this in the case of chained decorators. Please notice the nuance of how this is different from the question
How decorators chaining work?
It seems quite a number of other users had the same doubts about the call order of chained Python decorators. It is not like I can't add a __call__ and see the order for myself. I get this; my point is, why was it designed to start from the bottom when it comes to chained Python decorators?
E.g.
def first_func(func):
    def inner():
        x = func()
        return x * x
    return inner

def second_func(func):
    def inner():
        x = func()
        return 2 * x
    return inner

@first_func
@second_func
def num():
    return 10

print(num())
Quoting the documentation on decorators:
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:
def f(arg):
    ...
f = staticmethod(f)

@staticmethod
def f(arg):
    ...
From this it follows that the decoration in
@a
@b
@c
def fun():
    ...
is equivalent to
fun = a(b(c(fun)))
IOW, it was designed like that because it's just syntactic sugar.
For proof, let's just decorate an existing function and not return a new one:
def dec1(f):
    print(f"dec1: got {vars(f)}")
    f.dec1 = True
    return f

def dec2(f):
    print(f"dec2: got {vars(f)}")
    f.dec2 = True
    return f

@dec1
@dec2
def foo():
    pass

print(f"Fully decked out: {vars(foo)}")
prints out
dec2: got {}
dec1: got {'dec2': True}
Fully decked out: {'dec2': True, 'dec1': True}
TL;DR
g(f(x)) means applying f to x first, then applying g to the output.
Omit the parentheses, add @ before and a line break after each function name:

@g
@f
x

(Syntax only valid if x is the definition of a function/class.)
Abstract explanation
The reasoning behind this design decision becomes fairly obvious IMHO, if you remember what the decorator syntax - in its most abstract and general form - actually means. So I am going to try the abstract approach to explain this.
It is all about syntax
To be clear here, the distinguishing factor in the concept of the "decorator" is not the object underneath it (so to speak) nor the operation it performs. It is the special syntax and the restrictions for it. Thus, a decorator at its core is nothing more than a feature of Python grammar.
The decorator syntax requires a target to be decorated. Initially (see PEP 318) the target could only be function definitions; later class definitions were also allowed to be decorated (see PEP 3129).
Minimal valid syntax
Syntactically, this is valid Python:
def f(): pass

@f
class Target: pass  # or `def target(): pass`

However, this will (perhaps unsurprisingly) cause a TypeError upon execution. As has been reiterated multiple times here and in other posts on this platform, the above is equivalent to this:
def f(): pass
class Target: pass
Target = f(Target)
Minimal working decorator
The TypeError stems from the fact that f does not accept a positional argument. This is the obvious logical restriction imposed by what a decorator is supposed to do. Thus, to achieve not only syntactically valid code, but also have it run without errors, this is sufficient:
def f(x): pass
@f
class Target: pass
This is still not very useful, but it is enough for the most general form of a working decorator.
Decoration is just application of a function to the target and assigning the output to the target's name.
Chaining functions ⇒ Chaining decorators
We can ignore the target and what it is or does and focus only on the decorator. Since it merely stands for applying a function, the order of operations comes into play, as soon as we have more than one. What is the order of operation, when we chain functions?
def f(x): pass
def g(x): pass
class Target: pass
Target = g(f(Target))
Well, just like in the composition of purely mathematical functions, this implies that we apply f to Target first and then apply g to the result of f. Despite g appearing first (i.e. further left), it is not what is applied first.
Since stacking decorators is equivalent to nesting functions, it seems obvious to define the order of operation the same way. This time, we just skip the parentheses, add an @ symbol in front of the function name and a line break after it.
def f(x): pass
def g(x): pass

@g
@f
class Target: pass
But, why though?
If after the explanation above (and reading the PEPs for historic background), the reasoning behind the order of operation is still not clear or still unintuitive, there is not really any good answer left, other than "because the devs thought it made sense, so get used to it".
PS
I thought I'd add a few things for additional context based on all the comments around your question.
Decoration vs. calling a decorated function
A source of confusion seems to be the distinction between what happens when applying the decorator versus calling the decorated function.
Notice that in my examples above I never actually called target itself (the class or function being decorated). Decoration is itself a function call. Adding @f above the target is calling f and passing the target to it as the first positional argument.
A "decorated function" might not even be a function
The distinction is very important because nowhere does it say that a decorator actually needs to return a callable (function or class). f being just a function means it can return whatever it wants. This is again valid and working Python code:
def f(x): return 3.14

@f
def target(): return "foo"

try:
    target()
except Exception as e:
    print(repr(e))

print(target)
Output:
TypeError("'float' object is not callable")
3.14
Notice that the name target does not even refer to a function anymore. It just holds the 3.14 returned by the decorator. Thus, we cannot even call target. The entire function behind it is essentially lost immediately before it is even available to the global namespace. That is because f just completely ignores its first positional argument x.
Replacing a function
Expanding this further, if we want, we can have f return a function. Not doing that seems very strange, considering it is used to decorate a function. But it doesn't have to be related to the target at all. Again, this is fine:
def bar(): return "bar"
def f(x): return bar

@f
def target(): return "foo"

print(target())
print(target is bar)
Output:
bar
True
It comes down to convention
The way decorators are actually overwhelmingly used out in the wild, is in a way that still keeps a reference to the target being decorated around somewhere. In practice it can be as simple as this:
def f(x):
    print(f"applied `f({x.__name__})`")
    return

@f
def target(): return "foo"
Just running this piece of code outputs applied `f(target)`. Again, notice that we don't call target here, we only called f. But now, the decorated function is still target, so we could add the call print(target()) at the bottom and that would output foo after the other output produced by f.
The fact that most decorators don't just throw away their target comes down to convention. You (as a developer) would not expect your function/class to simply be thrown away completely, when you use a decorator.
Decoration with wrapping
This is why real-life decorators typically either return the reference to the target at the end outright (like in the last example) or return a different callable that itself calls the target, meaning a reference to the target is kept in that new callable's local namespace. These functions are what are usually referred to as wrappers:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper():
        print(f"wrapper executing with {locals()=}")
        return x()
    return wrapper

@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
wrapper executing with locals()={'x': <function target at 0x7f1b2f78f250>}
target()='foo'
target.__name__='wrapper'
As you can see, what the decorator left us is wrapper, not what we originally defined as target. And the wrapper is what we call, when we write target().
Wrapping wrappers
This is the kind of behavior we typically expect when we use decorators. And therefore it is not surprising that multiple decorators stacked together behave the way they do. They are called from the inside out (as explained above), and each adds its own wrapper around what it receives from the one applied before:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper_from_f():
        print(f"wrapper_from_f executing with {locals()=}")
        return x()
    return wrapper_from_f

def g(x):
    print(f"applied `g({x.__name__})`")
    def wrapper_from_g():
        print(f"wrapper_from_g executing with {locals()=}")
        return x()
    return wrapper_from_g

@g
@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
applied `g(wrapper_from_f)`
wrapper_from_g executing with locals()={'x': <function f.<locals>.wrapper_from_f at 0x7fbfc8d64f70>}
wrapper_from_f executing with locals()={'x': <function target at 0x7fbfc8d65630>}
target()='foo'
target.__name__='wrapper_from_g'
This shows very clearly the difference between the order in which the decorators are called and the order in which the wrapped/wrapping functions are called.
After the decoration is done, we are left with wrapper_from_g, which is referenced by our target name in global namespace. When we call it, wrapper_from_g executes and calls wrapper_from_f, which in turn calls the original target.
I'm trying to understand the behavior of decorators.
I understand that a decorator has to return an object so I can understand the syntax below:
def my_deco(fonction):
    print("Deco is called with parameter the function {0}".format(fonction))
    return fonction

@my_deco
def hello():
    print("hello!")

Deco is called with parameter the function <function hello at 0x00BA5198>
Here the decorator does not do much, but in case I need to modify the function, I'd define a decorator like this:
def my_deco(fonction):
    def modified_func():
        print("Warning! calling {0}".format(fonction))
        return fonction()
    return modified_func

@my_deco
def hello():
    print("hello!")
The initial function behavior is modified through modified_func. This is fine.
It includes the call to the initial function. This is fine.
Now what I don't understand is: why do we return the result of the function? In my case the function is a simple print, so I don't get why I should return something.
Thanks for your explanation
As it is in the comments: usually when you write a decorator, you make it so that it can be used with any possible function. And the way to do that is to return either whatever the original function returned, or transform that return value (which can also be done in the wrapper function).
In Python, all functions actually do return something. Functions without an explicit return statement return the value None. So, if your wrapper function, inside the decorator, always returns whatever the decorated function returned, it will be on the safe side: even if the decorated function has no explicit return, it will return a None that is just forwarded by your wrapper.
Now, that is not "mandatory". If you know beforehand that your decorator will only be applied to functions with no return value, you are free not to put a return statement in the wrapper function as well - it is not an incorrect syntax (but it is likely a trap for your future self).
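A minimal sketch of that safe pattern (the decorator and function names here are invented for illustration), showing that a wrapper which always forwards the return value works both for functions that return something and for ones that don't:

```python
import functools

def logged(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)  # capture whatever func returns
        print(f"{func.__name__} returned {result!r}")
        return result                   # forward it, even if it is None
    return wrapper

@logged
def add(x, y):
    return x + y

@logged
def greet():
    print("hello")  # no explicit return, so the function returns None

print(add(2, 3))  # 5 — the wrapper forwarded the sum
print(greet())    # None — forwarded just as safely
```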
I have a decorator function my_fun(I,k) and it is applied to a function add(x,y) as such:

@my_fun(4,5)
def add(x,y): return x+y
I am new to Python and would like to know, as I am writing the my_fun function:
How can I access x and y of the add method in my_fun?
How can I access the return value of add in the decorator function?
I am a little confused on syntax and concepts any explanation would be help.
A decorator consists of the decorator function and a function wrapper (and if you want additional arguments for the decorator another outer layer of function around it):
# Takes the arguments for the decorator and makes them accessible inside
def my_fun(decorator_argument1, decorator_argument2):
    # Takes the function so that it can be wrapped.
    def wrapfunc(func):
        # Here we are actually going to wrap the function ... finally
        def wrapper(*args, **kwargs):
            # Call the function with the args and kwargs
            res = func(*args, **kwargs)
            # return this result
            return res
        # Replace the decorated function with the wrapper
        return wrapper
    # Return the wrapper for the function wrapper :-)
    return wrapfunc
In your case, if you only want to use the decorator with your function, you don't need to bother with the *args, **kwargs and can replace it with:
def wrapper(x, y):
    # Here you can do stuff with x and y, i.e. print(x)
    # Call the function with x and y
    res = func(x, y)
    # Here you can do stuff with the result, i.e. res = res * decorator_argument1
    return res
I indicated the places where you can access x and y and the result.
If you want to predefine values for x and y a custom decorator is not the best way. You could use defaults:
def add(x=4,y=5): return x+y
add() # returns 9
add(2) # returns 7
add(5, 10) # returns 15
or if you want to fix an argument you should use functools.partial
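For example, functools.partial can pre-fill arguments without any decorator machinery (a minimal sketch; the add_4 and add_y10 names are just illustrative):

```python
from functools import partial

def add(x, y):
    return x + y

add_4 = partial(add, 4)       # positionally fixes x=4
add_y10 = partial(add, y=10)  # fixes the keyword argument y instead

print(add_4(5))    # 9
print(add_y10(5))  # 15
```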
If you're passing arguments to the decorator with #my_fun(4, 5), you need three levels of nested functions to implement the decorator in the simplest way. The outer level is the "decorator factory". It returns the middle level function, the decorator. The decorator gets called with the function it's decorating as an argument and needs to return the inner most nested function, the wrapper. The wrapper function is the one that gets called by the user.
def decorator_factory(deco_arg, deco_arg2):  # name this whatever you want to use with @ syntax
    def decorator(func):
        def wrapper(func_arg, func_arg2):
            # This is a closure!
            # In here you can write code using the arguments from the enclosing scopes, e.g.:
            return func(func_arg*deco_arg, func_arg2*deco_arg2)  # uses args from all levels
        return wrapper
    return decorator
The inner functions here are closures. They can see the variables in the scope surrounding the place they were defined in, even after the functions those scopes belonged to have finished running.
(Note, if you want your decorator to be able to decorate many different functions, you may want the wrapper function to accept *args and **kwargs and pass them along to func. The example above only works for functions that accept exactly two arguments. A limitation like that may be perfectly reasonable for some uses, but not always.)
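As a sketch of that generalization (multiply_result is an invented name), the same factory written with *args and **kwargs decorates functions of any signature:

```python
import functools

def multiply_result(factor):               # decorator factory: takes the decorator argument
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):      # accepts any signature
            return factor * func(*args, **kwargs)
        return wrapper
    return decorator

@multiply_result(10)
def add(x, y):
    return x + y

@multiply_result(10)
def identity(z):
    return z

print(add(4, 5))    # 90
print(identity(7))  # 70
```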
Suppose I have a python function
def func(self):
    self.method_1()
    self.method_2()
How can I write a unit test that can assert method_1 is called before method_2?

@mock.patch(method_1)
@mock.patch(method_2)
def test_call_order(method_2_mock, method_1_mock):
    # Test the order
Your case is a slight variation of Python Unit Testing with two mock objects, how to verify call-order?. What you should do is set method_2_mock, method_1_mock as children of one new mock object and then inspect the mock_calls attribute or use assert_has_calls:
@mock.patch(method_1)
@mock.patch(method_2)
def test_call_order(method_2_mock, method_1_mock):
    mock_parent = Mock()
    mock_parent.m1, mock_parent.m2 = method_1_mock, method_2_mock
    <test code>
    """Check if method 1 is called before method 2"""
    mock_parent.assert_has_calls([call.m1(), call.m2()])
There are a lot of details omitted in this code, like call arguments. Take a look at call and the very useful ANY helper.
ATTENTION
This is valid just for unittest.mock in Python 3. For Python 2.7 and mock 1.0.1 you should use attach_mock instead.
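To make the attach_mock approach concrete, here is a self-contained sketch (the Worker class and its method names are invented for illustration); attach_mock registers each patched method as a child of one parent mock, so the parent records their relative order:

```python
from unittest import mock

class Worker:
    def method_1(self): ...
    def method_2(self): ...
    def func(self):
        self.method_1()
        self.method_2()

def test_call_order():
    with mock.patch.object(Worker, "method_1") as m1, \
         mock.patch.object(Worker, "method_2") as m2:
        parent = mock.Mock()
        parent.attach_mock(m1, "m1")  # calls to m1 are now recorded on parent
        parent.attach_mock(m2, "m2")
        Worker().func()
        # Raises AssertionError if m2() was recorded before m1()
        parent.assert_has_calls([mock.call.m1(), mock.call.m2()])

test_call_order()
print("call order verified")
```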
Another option is to create a simple list and append each mock to it through side_effect.
@mock.patch(method_1)
@mock.patch(method_2)
def test_call_order(method_2_mock, method_1_mock):
    call_order = []
    method_1_mock.side_effect = lambda *a, **kw: call_order.append(method_1_mock)
    method_2_mock.side_effect = lambda *a, **kw: call_order.append(method_2_mock)
    # Run test code...
    assert call_order == [method_1_mock, method_2_mock]
Each time the method is called, the side_effect lambda function is called. Since lists are ordered, this is a clean way to check the call order of your methods.
Improving on the second approach, by Chris Collett.
Using side_effect like that makes the call return None. This is a problem if you need the mock method to return a value. A simple solution is to use a helper function:
call_order = []

def log_ret(func, ret_val):
    call_order.append(func)
    return ret_val

method_1_mock.side_effect = lambda *a, **kw: log_ret(method_1_mock, 'return_value_1')
method_2_mock.side_effect = lambda *a, **kw: log_ret(method_2_mock, 'return_value_2')
Cheers
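Assembled into a self-contained sketch (creating the mocks directly rather than via mock.patch), the helper records the order while still returning a value from each call:

```python
from unittest.mock import Mock

call_order = []

def log_ret(func, ret_val):
    call_order.append(func)  # remember which mock was called
    return ret_val           # side_effect's return value becomes the call's return value

method_1_mock = Mock(side_effect=lambda *a, **kw: log_ret(method_1_mock, "return_value_1"))
method_2_mock = Mock(side_effect=lambda *a, **kw: log_ret(method_2_mock, "return_value_2"))

print(method_1_mock())                               # 'return_value_1'
print(method_2_mock())                               # 'return_value_2'
print(call_order == [method_1_mock, method_2_mock])  # True — order and return values both preserved
```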
Let's assume we have a decorator:
def decor(function):
    def result():
        print('decorated')
        return function()
    return result
What is the difference between following code:
@decor
def my_foo():
    print('my_foo')

and:

def my_foo():
    print('my_foo')
my_foo = decor(my_foo)
Your last code snippet is almost the definition of a decorator. The only difference is that in the first case, the name decor is evaluated before the function definition, while in the second case it is evaluated after the function definition. This only makes a difference if executing the function definition changes what the name refers to.
Nonsensical example:
def decor(function):
    def result():
        print('decorated')
        return function()
    return result

def plonk():
    global decor
    decor = lambda x: x
    return None
Now
@decor
def my_foo(foo=plonk()):
    print('my_foo')

is different from

def my_foo(foo=plonk()):
    print('my_foo')
my_foo = decor(my_foo)
There isn't a difference. The @decorator syntax simply makes it easier to understand that a decorator is being applied. (This is an example of syntactic sugar.)
If there is a difference, it is that Python versions prior to Python 2.4 do not support the @decorator syntax, while the explicit decorator call has been supported since the stone age. Also, the @decorator syntax has to be applied at function definition and has to use the same function name, while the explicit decorator call can be applied later and can rename the decorated function.
Use the @decorator syntax unless you have a really, really, really good reason not to; which is almost never.