Python decorators implementation

I am trying to understand Python decorators, so I decided to check the inner workings of Django's @login_required decorator. After looking at the source code, I got here:
def user_passes_test(test_func, login_url=None,
                     redirect_field_name=REDIRECT_FIELD_NAME):
    """
    Decorator for views that checks that the user passes the given test,
    redirecting to the log-in page if necessary. The test should be a callable
    that takes the user object and returns True if the user passes.
    """
From what I understand, the above function is supposed to use the return value of test_func to determine whether to redirect the user to the login page or allow them to continue. My problem is that I can't find anywhere this test_func is called. How does user_passes_test work, exactly? Any help will be appreciated.
edit:
I have realized that I am the problem: I was looking at the source code in the Django documentation, and I have just noticed that there was an indentation error. All good now.

It's called in this line.
To explain what's going on, let's use a simpler example. Assume you have various functions (corresponding to the views) that need to have their arguments checked (corresponding to the user check) before being executed; if the arguments fail the check, the result should be None (corresponding to the login redirect).
Here's example code, the structure of which matches Django's user_passes_test:
def arguments_pass_test(test_func):
    def decorator(func):
        def _wrapped_func(*args, **kwargs):
            if test_func(*args, **kwargs):
                return func(*args, **kwargs)
            else:
                return None
        return _wrapped_func
    return decorator

@arguments_pass_test(lambda x: x != 0)
def inverse(x):
    return 1.0 / x

print(inverse(2))  # 0.5
print(inverse(0))  # None
So let's look at how the decorator magic happens, that is this part:
@arguments_pass_test(lambda x: x != 0)
def inverse(x):
    return 1.0 / x

First off, arguments_pass_test(lambda x: x != 0) is just a function call; the @ doesn't come into play yet. The return value from that function call to arguments_pass_test is the inner function called decorator, and the naming makes it clear that this function is the actual decorator.
So now we have this:
@decorator
def inverse(x):
    return 1.0 / x
The decorator syntax gets translated by Python into something roughly equivalent to this:
def inverse(x):
    return 1.0 / x
inverse = decorator(inverse)

So inverse gets replaced with the result of calling the decorator with the original function inverse as its argument func. The result of calling the decorator is what's called _wrapped_func. So what's happening is similar to this:
def _original_inverse(x):
    return 1.0 / x

def _wrapped_func(x):
    if test_func(x):  # this doesn't exist here, but bear with me
        return _original_inverse(x)
    else:
        return None

inverse = _wrapped_func
which is finally equivalent(ish) to
def inverse(x):
    if x != 0:
        return 1.0 / x
    else:
        return None
which is exactly what we were going for.
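One caveat worth mentioning: with this plain wrapping, the decorated function loses the original's __name__ and docstring. A minimal sketch of the usual fix, functools.wraps, applied to the same example:

```python
from functools import wraps

def arguments_pass_test(test_func):
    def decorator(func):
        @wraps(func)  # copies __name__, __doc__, etc. from func onto the wrapper
        def _wrapped_func(*args, **kwargs):
            if test_func(*args, **kwargs):
                return func(*args, **kwargs)
            return None
        return _wrapped_func
    return decorator

@arguments_pass_test(lambda x: x != 0)
def inverse(x):
    """Return the multiplicative inverse of x."""
    return 1.0 / x

print(inverse.__name__)  # inverse, not _wrapped_func
print(inverse(2))        # 0.5
print(inverse(0))        # None
```

Django's real user_passes_test does the same thing with @wraps on its _wrapped_view.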

I'm not sure where you got that function, but I just freshly installed Django through pip on my Python 3.6, and here is what I found in django.contrib.auth.decorators. The following is accessible here as well.
def user_passes_test(test_func, login_url=None, redirect_field_name=REDIRECT_FIELD_NAME):
    """
    Decorator for views that checks that the user passes the given test,
    redirecting to the log-in page if necessary. The test should be a callable
    that takes the user object and returns True if the user passes.
    """
    def decorator(view_func):
        @wraps(view_func, assigned=available_attrs(view_func))
        def _wrapped_view(request, *args, **kwargs):
            if test_func(request.user):
                return view_func(request, *args, **kwargs)
            path = request.build_absolute_uri()
            resolved_login_url = resolve_url(login_url or settings.LOGIN_URL)
            # If the login url is the same scheme and net location then just
            # use the path as the "next" url.
            login_scheme, login_netloc = urlparse(resolved_login_url)[:2]
            current_scheme, current_netloc = urlparse(path)[:2]
            if ((not login_scheme or login_scheme == current_scheme) and
                    (not login_netloc or login_netloc == current_netloc)):
                path = request.get_full_path()
            from django.contrib.auth.views import redirect_to_login
            return redirect_to_login(
                path, resolved_login_url, redirect_field_name)
        return _wrapped_view
    return decorator
This makes much more sense, since test_func is effectively called inside of the decorator.
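To see the same structure without any Django machinery, here is a framework-free sketch that mimics it. The Request/URL handling is stripped out, and REDIRECT_SENTINEL is a made-up stand-in for the redirect_to_login() response:

```python
REDIRECT_SENTINEL = 'REDIRECT_TO_LOGIN'  # stand-in for Django's redirect response

def user_passes_test(test_func):
    def decorator(view_func):
        def _wrapped_view(user, *args, **kwargs):
            if test_func(user):                   # the call the question was about
                return view_func(user, *args, **kwargs)
            return REDIRECT_SENTINEL              # Django builds a redirect here
        return _wrapped_view
    return decorator

@user_passes_test(lambda user: user.get('is_authenticated', False))
def dashboard(user):
    return 'dashboard for ' + user['name']

print(dashboard({'name': 'alice', 'is_authenticated': True}))  # dashboard for alice
print(dashboard({'name': 'bob'}))                              # REDIRECT_TO_LOGIN
```

The key point is the same as in the real source: test_func is never called at decoration time, only inside _wrapped_view on each request.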


Python 3 - Decorators execution flow

The example below is taken from the Python Cookbook, 3rd edition, section 9.5.
I placed breakpoints at each line to understand the flow of execution. Below is the code sample, its output, and the questions I have. I have tried to explain my questions; let me know if you need further info.
from functools import wraps, partial
import logging

# Utility decorator to attach a function as an attribute of obj
def attach_wrapper(obj, func=None):
    if func is None:
        return partial(attach_wrapper, obj)
    setattr(obj, func.__name__, func)
    return func

def logged(level, name=None, message=None):
    def decorate(func):
        logname = name if name else func.__module__
        log = logging.getLogger(logname)
        logmsg = message if message else func.__name__

        @wraps(func)
        def wrapper(*args, **kwargs):
            log.log(level, logmsg)
            return func(*args, **kwargs)

        @attach_wrapper(wrapper)
        def set_message(newmsg):
            nonlocal logmsg
            logmsg = newmsg

        return wrapper
    return decorate

# Example use
@logged(logging.DEBUG)
def add(x, y):
    return x + y

logging.basicConfig(level=logging.DEBUG)
add.set_message('Add called')
# add.set_level(logging.WARNING)
print(add(2, 3))
output is
DEBUG:__main__:Add called
5
I understand the concept of decorators, but this is confusing a little.
Scenario 1: when the line @logged(logging.DEBUG) is stepped through, we get
decorate = <function logged.<locals>.decorate at 0x...>
Question: why would control go back to execute the function def decorate? Is it because the decorate function is on top of the stack?
Scenario 2: when executing @attach_wrapper(wrapper), control goes to attach_wrapper(obj, func=None) and the partial call returns
func = <function logged.<locals>.decorate.<locals>.set_message at 0x...>
Question: why would control go back to execute def attach_wrapper(obj, func=None), and how is the value <function logged.<locals>.decorate.<locals>.set_message at 0x...> passed to attach_wrapper as func this time?
Scenario 1
This:
@logged(logging.DEBUG)
def add(x, y):
    ....

is the same as this:

def add(x, y):
    ....
add = logged(logging.DEBUG)(add)
Note that there are two calls there: first logged(logging.DEBUG) returns decorate and then decorate(add) is called.
Scenario 2
Same as in Scenario 1, this:
@attach_wrapper(wrapper)
def set_message(newmsg):
    ...

is the same as this:

def set_message(newmsg):
    ...
set_message = attach_wrapper(wrapper)(set_message)

Again, there are two calls: first attach_wrapper(wrapper) returns the partial object, and then that partial object is called with set_message.
In other words...
logged and attach_wrapper are not decorators. They are functions which return decorators. That is why two calls are made: one to the function which returns the decorator, and another to the decorator itself.
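You can watch the two-step sequence directly by expanding the decorator sugar by hand. A minimal sketch (a trimmed-down version of the cookbook's logged, without the set_message machinery):

```python
import logging
from functools import wraps

def logged(level, name=None, message=None):
    def decorate(func):
        log = logging.getLogger(name if name else func.__module__)
        logmsg = message if message else func.__name__

        @wraps(func)
        def wrapper(*args, **kwargs):
            log.log(level, logmsg)
            return func(*args, **kwargs)
        return wrapper
    return decorate

def add(x, y):
    return x + y

decorate = logged(logging.DEBUG)  # first call: returns the decorator
add = decorate(add)               # second call: applies it to add

print(add(2, 3))     # 5
print(add.__name__)  # add (preserved by @wraps)
```

Stepping through this in a debugger shows exactly the control flow asked about: the call to logged runs to completion first, and only then is the returned decorate invoked with add.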

Getting the value of a mutable keyword argument of a decorator

I have the following code, in which I simply have a decorator for caching a function's results, and as a concrete implementation, I used the Fibonacci function.
After playing around with the code, I wanted to print the cache variable that's initialized in the cache wrapper.
(It's not because I suspect the cache might be faulty; I simply want to know how to access it without going into debug mode and putting a breakpoint inside the decorator.)
I tried to explore the fib_w_cache function in debug mode, which is supposed to actually be the wrapped fib_w_cache, but with no success.
import timeit

def cache(f, cache=dict()):
    def args_to_str(*args, **kwargs):
        return str(args) + str(kwargs)

    def wrapper(*args, **kwargs):
        args_str = args_to_str(*args, **kwargs)
        if args_str in cache:
            # print("cache used for: %s" % args_str)
            return cache[args_str]
        else:
            val = f(*args, **kwargs)
            cache[args_str] = val
            return val
    return wrapper

@cache
def fib_w_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache(n - 2) + fib_w_cache(n - 1)

def fib_wo_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_wo_cache(n - 1) + fib_wo_cache(n - 2)

print(timeit.timeit('[fib_wo_cache(i) for i in range(0,30)]', globals=globals(), number=1))
print(timeit.timeit('[fib_w_cache(i) for i in range(0,30)]', globals=globals(), number=1))
I admit this is not an "elegant" solution in a sense, but keep in mind that Python functions are also objects. So with some slight modification to your code, I managed to inject the cache as an attribute of the decorated function:
import timeit

def cache(f):
    def args_to_str(*args, **kwargs):
        return str(args) + str(kwargs)

    def wrapper(*args, **kwargs):
        args_str = args_to_str(*args, **kwargs)
        if args_str in wrapper._cache:
            # print("cache used for: %s" % args_str)
            return wrapper._cache[args_str]
        else:
            val = f(*args, **kwargs)
            wrapper._cache[args_str] = val
            return val
    wrapper._cache = {}
    return wrapper

@cache
def fib_w_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache(n - 2) + fib_w_cache(n - 1)

@cache
def fib_w_cache_1(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache_1(n - 2) + fib_w_cache_1(n - 1)

def fib_wo_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_wo_cache(n - 1) + fib_wo_cache(n - 2)

print(timeit.timeit('[fib_wo_cache(i) for i in range(0,30)]', globals=globals(), number=1))
print(timeit.timeit('[fib_w_cache(i) for i in range(0,30)]', globals=globals(), number=1))
print(fib_w_cache._cache)
print(fib_w_cache_1._cache)  # to prove that caches are different instances for different functions
cache is of course a perfectly normal local variable in scope within the cache function, and a perfectly normal nonlocal cellvar in scope within the wrapper function, so if you want to access the value from there, you just do it—as you already are.
But what if you wanted to access it from somewhere else? Then there are two options.
First, cache happens to be defined at the global level, meaning any code anywhere (that hasn't hidden it with a local variable named cache) can access the function object.
And if you're trying to access the values of a function's default parameters from outside the function, they're available in the attributes of the function object. The inspect module docs explain the inspection-oriented attributes of each builtin type:
__defaults__ is a sequence of the values for all positional-or-keyword parameters, in order.
__kwdefaults__ is a mapping from keywords to values for all keyword-only parameters.
So:
>>> def f(a, b=0, c=1, *, d=2, e=3): pass
>>> f.__defaults__
(0, 1)
>>> f.__kwdefaults__
{'e': 3, 'd': 2}
So, for a simple case where you know there's exactly one default value and know which argument it belongs to, all you need is:
>>> cache.__defaults__[0]
{}
If you need to do something more complicated or dynamic, like get the default value for c in the f function above, you need to dig into other information—the only way to know that c's default value will be the second one in __defaults__ is to look at the attributes of the function's code object, like f.__code__.co_varnames, and figure it out from there. But usually, it's better to just use the inspect module's helpers. For example:
>>> inspect.signature(f).parameters['c'].default
1
>>> inspect.signature(cache).parameters['cache'].default
{}
Alternatively, if you're trying to access the cache from inside fib_w_cache, while there's no variable in lexical scope in that function body you can look at, you do know that the function body is only called by the decorator wrapper, and it is available there.
So, you can get your stack frame
frame = inspect.currentframe()
… follow it back to your caller:
back = frame.f_back
… and grab it from that frame's locals:
back.f_locals['cache']
It's worth noting that f_locals works like the locals function: it's actually a copy of the internal locals storage, so modifying it may have no effect, and that copy flattens nonlocal cell variables to regular local variables. If you wanted to access the actual cell variable, you'd have to grub around in things like back.f_code.co_freevars to get the index and then dig it out of the function object's __closure__. But usually, you don't care about that.
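The frame walk described above can be sketched end to end with a toy pair of functions (names are my own, purely illustrative):

```python
import inspect

def outer():
    secret = 42                  # an ordinary local in outer's frame
    return inner()

def inner():
    frame = inspect.currentframe()   # inner's own frame
    back = frame.f_back              # the caller's frame (outer's)
    return back.f_locals['secret']   # read the caller's local variable

print(outer())  # 42
```

In the question's setup, the caller of fib_w_cache's body is the decorator's wrapper, so back.f_locals there would contain the cache cell variable.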
Just for the sake of completeness, Python has a caching decorator built in, functools.lru_cache, with some inspection mechanisms:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_w_cache(n):
    if n == 0: return 0
    elif n == 1: return 1
    else:
        return fib_w_cache(n - 2) + fib_w_cache(n - 1)

print('fib_w_cache(10) = ', fib_w_cache(10))
print(fib_w_cache.cache_info())
Prints:
fib_w_cache(10) = 55
CacheInfo(hits=8, misses=11, maxsize=None, currsize=11)
I managed to find a solution (in some sense by @Patrick Haugh's advice).
I simply accessed cache.__defaults__[0], which holds the cache's dict.
The insights about the shared cache and how to avoid it were also quite useful.
Just as a note, the cache dictionary can only be accessed through the cache function object. It cannot be accessed through the decorated functions (at least as far as I understand). This aligns logically with the fact that the cache is shared in my implementation, whereas in the alternative implementation that was proposed, it is local per decorated function.
You can make a class into a wrapper.
def args_to_str(*args, **kwargs):
    return str(args) + str(kwargs)

class Cache(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args, **kwargs):
        args_str = args_to_str(*args, **kwargs)
        if args_str in self.cache:
            return self.cache[args_str]
        else:
            val = self.func(*args, **kwargs)
            self.cache[args_str] = val
            return val
Each function has its own cache; you can access it as function.cache. This also allows for any methods you wish to attach to your function.
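Applied to the Fibonacci example from the question, the class-based wrapper looks like this (self-contained usage sketch):

```python
def args_to_str(*args, **kwargs):
    return str(args) + str(kwargs)

class Cache(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}              # one dict per decorated function

    def __call__(self, *args, **kwargs):
        args_str = args_to_str(*args, **kwargs)
        if args_str not in self.cache:
            self.cache[args_str] = self.func(*args, **kwargs)
        return self.cache[args_str]

@Cache
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(10))         # 55
print(len(fib.cache))  # 11 entries, one per n in 0..10
```

Because fib is now a Cache instance, the cache is directly reachable as fib.cache, which is exactly what the question was after.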
If you wanted all decorated functions to share the same cache, you could use a class variable instead of an instance variable:

class SharedCache(object):
    cache = {}

    def __init__(self, func):
        self.func = func
    # rest of the code is the same

@SharedCache
def function_1(stuff):
    things

Python Flask Redirect

def check():
    if 3 > 2:
        return redirect(url_for('control'))

@app.route('/reason', methods=['GET', 'POST'])
def reason():
    check()
    return render_template('reason.html')

@app.route('/control', methods=['GET', 'POST'])
def control():
    return render_template('control.html')
I have two HTML files (control.html and reason.html). The code loads the reason.html page first and receives user input using the POST method; then, after doing certain checks, I want to load control.html.
The problem is I am not able to load control.html.
You are ignoring the return value from check(). Consequently, the object that will signal a redirect is simply lost.
You should inspect the return value from check() and either return that result, or the 'reason.html' result:
# UNTESTED
@app.route('/reason', methods=['GET', 'POST'])
def reason():
    check_result = check()
    if check_result:
        return check_result
    return render_template('reason.html')
On a related note, your check() has two return paths. One explicitly invokes return, the other implicitly returns None as it falls off the end of the function. This is perfectly legal, but confusing stylistically. Try this:
def check():
    if 3 > 2:
        return redirect(url_for('control'))
    return None
Perhaps a more understandable code arrangement would be to have check limit its responsibilities to simply checking and reporting the result; then reason() can be responsible for whatever page is displayed, like so:
# UNTESTED
def check():
    if 3 > 2:
        return True
    return False

# UNTESTED
@app.route('/reason', methods=['GET', 'POST'])
def reason():
    if check():
        return redirect(url_for('control'))
    return render_template('reason.html')

Find out how many return-values a function has

I'm creating some tests for some laser-driver functionality. For some lasers a piece of sub-functionality is supported; on others I should expect an error (and thus test that the error is raised).
A sub-functionality is of course (to keep it simple ;-) ) not one function, but multiple functions. So, given the laser, the test for a certain function should pass or fail.
The pass case is not a problem, but when it should fail and a function has multiple return values, the generic assert we created does not know how many return values it should return to the test function, so it only returns one None value for a laser that does not support the sub-functionality.
Because of this, for functions with multiple return values we have to check in the test whether it should fail or pass, and only then can we call our generic assert in the correct way.
But this is kind of ugly IMHO, since in that case we check whether the sub-functionality is supported twice (where the goal of the generic assert was to do that in one place).
Is there a way to determine the number of return values a function has before calling it?
The problem in pseudo-code:
The generic assert (plus the base-setup of the test):
import functionality_under_test as fut

class Functionality(unittest.TestCase):
    def shared_setup(self):
        Functionality._issupported = fut.subfunctionality.is_supported()

    def _assert_sub_functionality(self, func, *args, **kwargs):
        retval = None  # We don't know how many retvals there will be
        if not self._issupported:
            # Here is the place where it should determine how many
            # return values func really has and initialize retval
            # with the correct number of Nones in a tuple / one None.
            self.assertRaises(<expected error>, func, *args, **kwargs)
        else:
            # retval will be filled with either 1 value or a tuple by
            # a successful call.
            retval = func(*args, **kwargs)
        return retval
For functions with one return value there is no problem, and thus we can do this:

def test_subfunc_func_1_retval(self):
    only_retval = self._assert_sub_functionality(subfunc_func_1_retval, <args>, <kwargs>)
    <checks based on retval not being None>
But for testing functions with multiple return values this does not work (it raises an error, since only one return value is given where the test function expects multiple). So we made it like this:

def test_subfunc_func_2_retvals(self):
    if not self._issupported:
        # 'Fail' case, where an error is expected
        retval1, retval2 = None, None
        _ = self._assert_sub_functionality(subfunc_func_2_retvals, <args>, <kwargs>)
    else:
        # Pass case, no error expected. This works fine.
        retval1, retval2 = self._assert_sub_functionality(subfunc_func_2_retvals, <args>, <kwargs>)
    <checks based on the retvals not being None>
The extra check on self._issupported here is kind of ugly IMHO.
Can this be done in a different way?
Can _assert_sub_functionality determine how many return-values a function has and return that many times None in case the subfunctionality is not supported?
TIA
To begin with, functions have a single return value, but it could be a tuple.
So, your question is, can you statically analyze a Python function and deduce that it returns only specific tuples, namely those that are of a certain size.
Unfortunately, the answer is no - it's easy to show that this would solve the Halting Problem.
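That said, for the restricted case where every return statement is a literal expression or tuple, a best-effort static count is possible by walking the function's AST. This is only a heuristic (it cannot handle dynamically built return values, which is exactly the Halting Problem territory), and the helper name here is my own invention:

```python
import ast

SOURCE = """
def one_value():
    return 42

def two_values():
    return 1, 2
"""

def count_return_values(source, func_name):
    """Best-effort heuristic: for each `return` in the named function,
    record len(tuple) for a literal tuple, else 1."""
    counts = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            for sub in ast.walk(node):
                if isinstance(sub, ast.Return) and sub.value is not None:
                    if isinstance(sub.value, ast.Tuple):
                        counts.add(len(sub.value.elts))
                    else:
                        counts.add(1)
    return counts

print(count_return_values(SOURCE, 'one_value'))   # {1}
print(count_return_values(SOURCE, 'two_values'))  # {2}
```

For real code, inspect.getsource(func) can supply the source string, but only when the source is available (not for built-ins or C extensions), which is another reason this stays a heuristic.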
Looking at the answers both Ami Tavory and David Ehrmann gave (and at the Halting Problem wiki page ;-) ), I came to the following solution:
def _assert_sub_functionality(self, func, *args, **kwargs):
    supported = self._issupported
    retval = None
    if not supported:
        self.assertRaises(<expected error>, func, *args, **kwargs)
    else:
        # retval will be filled with either 1 value or a tuple by
        # a successful call.
        retval = func(*args, **kwargs)
    return supported, retval

def test_subfunc_func_1_retval(self):
    retval = self._assert_sub_functionality(subfunc_func_1_retval, <args>, <kwargs>)
    issupported = retval[0]
    if issupported:
        real_retval = retval[1]
        <tests with real_retval>

def test_subfunc_func_2_retvals(self):
    retval = self._assert_sub_functionality(subfunc_func_2_retvals, <args>, <kwargs>)
    issupported = retval[0]
    if issupported:
        (real_retval1, real_retval2) = retval[1]
        <checks with the real_retvals>
Kudos to both. (Solution check should also be before David's comment ;-) ).

Workaround for equality of nested functions

I have a nested function that I'm using as a callback in pyglet:
def get_stop_function(stop_key):
    def stop_on_key(symbol, _):
        if symbol == getattr(pyglet.window.key, stop_key):
            pyglet.app.exit()
    return stop_on_key

pyglet.window.set_handler('on_key_press', get_stop_function('ENTER'))
But then I run into problems later when I need to reference the nested function again:
pyglet.window.remove_handler('on_key_press', get_stop_function('ENTER'))
This doesn't work because of the way python treats functions:
my_stop_function = get_stop_function('ENTER')
my_stop_function is get_stop_function('ENTER') # False
my_stop_function == get_stop_function('ENTER') # False
Thanks to two similar questions I understand what is going on but I'm not sure what the workaround is for my case. I'm looking through the pyglet source code and it looks like pyglet uses equality to find the handler to remove.
So my final question is: how can I override the inner function's __eq__ method (or some other dunder) so that identical nested functions will be equal?
(Another workaround would be to store a reference to the function myself, but that is duplicating pyglet's job, will get messy with many callbacks, and anyways I'm curious about this question!)
Edit: actually, in the questions I linked above, it's explained that methods have value equality but not reference equality. With nested functions, you don't even get value equality, which is all I need.
Edit2: I will probably accept Bi Rico's answer, but does anyone know why the following doesn't work:
def get_stop_function(stop_key):
    def stop_on_key(symbol, _):
        if symbol == getattr(pyglet.window.key, stop_key):
            pyglet.app.exit()
    stop_on_key.__name__ = '__stop_on_' + stop_key + '__'
    stop_on_key.__eq__ = lambda x: x.__name__ == '__stop_on_' + stop_key + '__'
    return stop_on_key

get_stop_function('ENTER') == get_stop_function('ENTER')       # False
get_stop_function('ENTER').__eq__(get_stop_function('ENTER'))  # True
You could create a class for your stop functions and define your own comparison method.
class StopFunction(object):
    def __init__(self, stop_key):
        self.stop_key = stop_key

    def __call__(self, symbol, _):
        if symbol == getattr(pyglet.window.key, self.stop_key):
            pyglet.app.exit()

    def __eq__(self, other):
        try:
            return self.stop_key == other.stop_key
        except AttributeError:
            return False

StopFunction('ENTER') == StopFunction('ENTER')
# True
StopFunction('ENTER') == StopFunction('FOO')
# False
The solution is to keep a dictionary containing the generated functions around, so that when you make the second call, you get the same object as in the first call. That is, simply build some memoization logic, or use one of the existing libraries with memoizing decorators:
ALL_FUNCTIONS = {}

def get_stop_function(stop_key):
    if stop_key not in ALL_FUNCTIONS:
        def stop_on_key(symbol, _):
            if symbol == getattr(pyglet.window.key, stop_key):
                pyglet.app.exit()
        ALL_FUNCTIONS[stop_key] = stop_on_key
    else:
        stop_on_key = ALL_FUNCTIONS[stop_key]
    return stop_on_key
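The same memoization can be had without the explicit dictionary by caching the factory itself with functools.lru_cache. A sketch with the pyglet lookup replaced by a plain comparison so it runs standalone:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_stop_function(stop_key):
    def stop_on_key(symbol, _):
        # stand-in for the pyglet.window.key lookup and pyglet.app.exit()
        return symbol == stop_key
    return stop_on_key

# Same key -> the very same function object, so identity (and equality) hold:
print(get_stop_function('ENTER') is get_stop_function('ENTER'))  # True
print(get_stop_function('ENTER') is get_stop_function('FOO'))    # False
```

Since set_handler and remove_handler then receive the identical object for the same key, pyglet's equality-based handler lookup works without any extra bookkeeping.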
You can generalize Bi Rico's solution to allow wrapping any functions up with some particular equality function pretty easily.
The first problem is defining what the equality function should check. I'm guessing for this case, you want the code to be identical (meaning functions created from the same def statement will be equal, but two functions created from character-for-character copies of the def statement will not), and the closures to be equal (meaning that if you call get_stop_function with two equal but non-identical stop_keys the functions will be equal), and nothing else to be relevant. But that's just a guess, and there are many other possibilities.
Then you just wrap a function the same way you'd wrap any other kind of object; just make sure __call__ is one of the things you delegate:
class EqualFunction(object):
    def __init__(self, f):
        self.f = f

    def __eq__(self, other):
        return (self.__code__ == other.__code__ and
                all(x.cell_contents == y.cell_contents
                    for x, y in zip(self.__closure__, other.__closure__)))

    def __getattr__(self, attr):
        return getattr(self.f, attr)

    def __call__(self, *args, **kwargs):
        return self.f(*args, **kwargs)
If you want to support other dunder methods that aren't required to go through getattr (I don't think any of them are critical for functions, but I could be wrong…), either do it explicitly (as with __call__) or loop over them and add a generic wrapper to the type for each one.
To use the wrapper:
def make_f(i):
    def f():
        return i
    return EqualFunction(f)

f1 = make_f(0)
f2 = make_f(0.0)
assert f1 == f2
Or, notice that EqualFunction actually works as a decorator, which may be more readable.
So, for your code:
def get_stop_function(stop_key):
    @EqualFunction
    def stop_on_key(symbol, _):
        if symbol == getattr(pyglet.window.key, stop_key):
            pyglet.app.exit()
    return stop_on_key
