I have a complex library, which users can add functions to. This library is then used in a program which then accepts input. Problem is, the functions aren't being processed the way I want them to. Let me illustrate (I've simplified the code to show the salient points):
import functools

def register_command(func):
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
    return wrapper

def execute_command(func, *args, **kwargs):
    return functools.partial(func, *args, **kwargs)
'''DECORATORS: EASY WAY FOR THE USER TO ADD NEW FUNCTIONS TO THE LIBRARY'''
@register_command
def some_math_function(variable):
    return variable + 1
'''THIS IS THE PROGRAM WHICH USES THE ABOVE LIBRARY & USER-CREATED FUNCTIONS'''
input_var = input("input your variable: ")
response = execute_command(register_command, input_var)
print(response())
So, if the user were to input '10', input_var = 10, I expect the algorithm to return 11. Instead I get this:
<function register_command.<locals>.wrapper at 0x110368ea0>
Where am I going wrong?
Let's look at what decorators do first. A decorator accepts an object (could be a function, class, or possibly something else), does something, and returns a replacement object that gets reassigned to the name of the original.
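As a concrete sketch (the names here are illustrative, not from the question), a decorator that replaces a function with a wrapper looks like this:

```python
def shout(func):
    # The replacement object: a wrapper that post-processes the result
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout  # equivalent to: greet = shout(greet)
def greet(name):
    return "hello " + name

print(greet("bob"))  # HELLO BOB
```

After decoration, the name `greet` refers to `wrapper`, not to the original function.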
In that context, register_command does nothing useful. It returns a wrapper that calls the unmodified original. Based on the name, you'd probably want to record the function in some registry, like a dictionary:
function_registry = {}

def register_command(func):
    function_registry[func.__name__] = func
    return func
Remember, the things that a decorator does don't have to affect the function directly (or at all). And there's nothing wrong with simply returning the input function.
Now let's take a look at using the registry we just made. Your current execute_command does not execute anything. It creates and returns a callable object which would call the decorated function if you were to invoke it with no arguments. execute_command does not actually call the result.
You would probably want a function with a name like execute_command to look up a command by name, and run it:
def execute_command(name, *args, **kwargs):
    return function_registry[name](*args, **kwargs)
So now you can do something like
@register_command
def some_math_function(variable):
    return variable + 1
That will leave the function unmodified, but add an entry to the registry that maps the name 'some_math_function' to the function object some_math_function.
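As a self-contained sanity check (restating the registry pieces from above, without the input() prompt):

```python
function_registry = {}

def register_command(func):
    function_registry[func.__name__] = func
    return func  # the function itself is returned unmodified

def execute_command(name, *args, **kwargs):
    return function_registry[name](*args, **kwargs)

@register_command
def some_math_function(variable):
    return variable + 1

# the registry maps the name to the unmodified function object
print(execute_command('some_math_function', 10))  # 11
```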
Your program becomes
func_name = input('what function do you want to run? ')  # enter some_math_function
input_var = int(input("input your variable: "))
response = execute_command(func_name, input_var)
print(response)
This example does not perform any type checking, keep track of the number of arguments, or otherwise allow you to get creative with calling the functions, but it will get you started with decorators.
So here's a rudimentary system for converting a fixed number of inputs to the required types. Arguments are converted in the order provided. The function is responsible for raising an error if you didn't supply enough inputs. Keywords aren't supported:
function_registry = {}

def register_function(*converters):
    def decorator(func):
        def wrapper(*args):
            real_args = (c(arg) for c, arg in zip(converters, args))
            return func(*real_args)
        function_registry[func.__name__] = wrapper
        return wrapper
    return decorator
@register_function(int)
def some_math_func(variable):
    return variable + 1
def execute_command(name, *args):
    return function_registry[name](*args)
command = input("command me: ") # some_math_func 1
name, *args = command.split()
result = execute_command(name, *args)
print(result)
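To see the conversion in action with more than one argument and converter (this just exercises the system above; `add2` is a made-up example function):

```python
function_registry = {}

def register_function(*converters):
    def decorator(func):
        def wrapper(*args):
            # convert each raw string with its positional converter
            real_args = (c(arg) for c, arg in zip(converters, args))
            return func(*real_args)
        function_registry[func.__name__] = wrapper
        return wrapper
    return decorator

def execute_command(name, *args):
    return function_registry[name](*args)

@register_function(int, float)
def add2(a, b):
    return a + b

print(execute_command('add2', '2', '3.5'))  # 5.5
```

Each raw input string is passed through its converter in order: '2' becomes int 2, '3.5' becomes float 3.5.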
I'm developing a UI in Python for Maya, and I'm trying to write a function that performs an action when a frameLayout expands. I have several frameLayout objects and I would like to use a single function, fl_expand, instead of one for each object:
def fl_expand(*args):
    print args

with frameLayout("layout1", collapsable=True, expandCommand=fl_expand):
    ...
with frameLayout("layout2", collapsable=True, expandCommand=fl_expand):
    ...
but I don't know how to pass the instance name as argument to the function, I tried:
with frameLayout("layout1", collapsable=True, expandCommand=fl_expand("layout1")):
    ...
But of course I get this error:
# Error: TypeError: file /usr/autodesk/maya2018/lib/python2.7/site-packages/pymel/internal/pmcmds.py line 134: Invalid arguments for flag 'ec'. Expected string or function, got NoneType #
Currently, you have something like this:
def fl_expand(*args):
    print(args)

def frameLayout(name, collapsable, expandCommand):
    print("frameLayout: " + name)
    expandCommand('foo')

frameLayout("layout1", collapsable=True, expandCommand=fl_expand)
What you want is to call the fl_expand function with the first argument already filled with the name of the layout. To do that, you can use a partial function. See the documentation for functools.partial.
You can try:
import functools
frameLayout("layout1", collapsable=True, expandCommand=functools.partial(fl_expand, "layout1"))
Of course, it could be laborious if you have a lot of calls like that. You can also define your own frameLayout function:

def myFrameLayout(name, collapsable, expandCommand):
    cmd = functools.partial(expandCommand, name)
    return frameLayout(name, collapsable, cmd)

myFrameLayout("layout2", collapsable=True, expandCommand=fl_expand)
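To illustrate what functools.partial does here, stand-alone and without Maya:

```python
import functools

def fl_expand(*args):
    return args

# partial pre-fills the first positional argument
cmd = functools.partial(fl_expand, "layout1")

print(cmd())         # ('layout1',)
print(cmd("extra"))  # ('layout1', 'extra')
```

Any arguments that Maya passes to the callback are simply appended after the pre-filled layout name.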
I'm using @functools.lru_cache in Python 3.3. I would like to save the cache to a file, in order to restore it when the program is restarted. How can I do that?
Edit 1: Possible solution: we need to pickle any sort of callable.
Problem pickling __closure__:
_pickle.PicklingError: Can't pickle <class 'cell'>: attribute lookup builtins.cell failed
If I try to restore the function without it, I get:
TypeError: arg 5 (closure) must be tuple
You can't do what you want using lru_cache, since it doesn't provide an API to access the cache, and it might be rewritten in C in future releases. If you really want to save the cache you have to use a different solution that gives you access to the cache.
It's simple enough to write a cache yourself. For example:
from functools import wraps

def cached(func):
    func.cache = {}
    @wraps(func)
    def wrapper(*args):
        try:
            return func.cache[args]
        except KeyError:
            func.cache[args] = result = func(*args)
            return result
    return wrapper
You can then apply it as a decorator:
>>> @cached
... def fibonacci(n):
...     if n < 2:
...         return n
...     return fibonacci(n-1) + fibonacci(n-2)
...
>>> fibonacci(100)
354224848179261915075L
And retrieve the cache:
>>> fibonacci.cache
{(32,): 2178309, (23,): 28657, ... }
You can then pickle/unpickle the cache as you please and load it with:
fibonacci.cache = pickle.load(cache_file_object)
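Here is a minimal round-trip of the cache through pickle, in Python 3 syntax. Note this is a slight variant of the decorator above: it keeps the cache on the wrapper itself so that reassigning fibonacci.cache (as in the line above) actually takes effect:

```python
import pickle
from functools import wraps

def cached(func):
    @wraps(func)
    def wrapper(*args):
        try:
            return wrapper.cache[args]
        except KeyError:
            wrapper.cache[args] = result = func(*args)
            return result
    wrapper.cache = {}  # cache lives on the wrapper so it can be reassigned
    return wrapper

@cached
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(30)
saved = pickle.dumps(fibonacci.cache)  # in practice: write this to a file
fibonacci.cache = pickle.loads(saved)  # in practice: load it on the next run
print(fibonacci(30))  # 832040, answered from the restored cache
```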
I found a feature request in python's issue tracker to add dumps/loads to lru_cache, but it wasn't accepted/implemented. Maybe in the future it will be possible to have built-in support for these operations via lru_cache.
You can use a library of mine, mezmorize:

import random
from mezmorize import Cache

cache = Cache(CACHE_TYPE='filesystem', CACHE_DIR='cache')

@cache.memoize()
def add(a, b):
    return a + b + random.randrange(0, 1000)
>>> add(2, 5)
727
>>> add(2, 5)
727
Consider using joblib.Memory for persistent caching to the disk.
Since the disk is enormous, there's no need for an LRU caching scheme.
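If you'd rather stay in the standard library, a rough equivalent of a disk-backed memoizer can be sketched with shelve. The decorator name and the key scheme here are my own invention, not part of any library:

```python
import os
import shelve
import tempfile
from functools import wraps

def disk_cached(path):
    """Memoize func(*args) results in a shelve database at `path`."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args):
            key = repr(args)  # crude key; assumes args have stable reprs
            with shelve.open(path) as db:
                if key not in db:
                    db[key] = func(*args)
                return db[key]
        return wrapper
    return decorator

# a throwaway location for the demo; in real code pick a stable path
cache_path = os.path.join(tempfile.mkdtemp(), "demo_cache")

@disk_cached(cache_path)
def square(x):
    return x * x

print(square(7))  # 49, computed once and then persisted across runs
```

Because the results live on disk, they survive restarts without any explicit save/load step.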
You are not supposed to touch anything inside the decorator implementation except the public API, so if you want to change its behavior you probably need to copy the implementation and add the necessary functions yourself. Note that the cache is currently stored as a circular doubly-linked list, so you will need to take care when saving and loading it.
This is something I wrote that might be helpful: devcache.
It's designed to help you speed up iterations for long-running methods. It's configurable with a config file:
@devcache(group='crm')
def my_method(a, b, c):
    ...

@devcache(group='db')
def another_method(a, b, c):
    ...
The cache can be refreshed or used with a yaml config file like:
refresh: false # refresh true will ignore use_cache and refresh all cached data
props:
1:
group: crm
use_cache: false
2:
group: db
use_cache: true
This would refresh the cache for my_method and use the cache for another_method.
It's not going to help you pickle the callable, but it does the caching part, and it would be straightforward to modify the code to add specialized serialization.
If your use-case is to cache the result of computationally intensive functions in your pytest test suites, pytest already has a file-based cache. See the docs for more info.
This being said, I had a few extra requirements:
I wanted to be able to call the cached function directly in the test instead of from a fixture
I wanted to cache complex python objects, not just simple python primitives/containers
I wanted an implementation that could refresh the cache intelligently (or be forced to invalidate only a single key)
Thus I came up with my own wrapper for the pytest cache, which you
can find below. The implementation is fully documented, but if you
need more info let me know and I'll be happy to edit this answer :)
Enjoy:
from base64 import b64encode, b64decode
import hashlib
import inspect
import pickle
from typing import Any, Optional

import pytest

__all__ = ['cached']


@pytest.fixture
def cached(request):
    def _cached(func: callable, *args, _invalidate_cache: bool = False, _refresh_key: Optional[Any] = None, **kwargs):
        """Caches the result of func(*args, **kwargs) cross-testrun.

        Cache invalidation can be performed by passing _invalidate_cache=True, or a _refresh_key can
        be passed for improved control over the invalidation policy.

        For example, given a function that executes a side effect such as querying a database:

            result = query(sql)

        can be cached as follows:

            refresh_key = query(sql=fast_refresh_sql)
            result = cached(query, sql=slow_or_expensive_sql, _refresh_key=refresh_key)

        or can be directly invalidated if you are doing rapid iteration of your test:

            result = cached(query, sql=sql, _invalidate_cache=True)

        Args:
            func (callable): Callable that will be called
            _invalidate_cache (bool, optional): Whether or not to invalidate the cache. Defaults to False.
            _refresh_key (Optional[Any], optional): Refresh key to provide a programmatic way to invalidate the cache. Defaults to None.
            *args: Positional args to pass to func
            **kwargs: Keyword args to pass to func

        Returns:
            The (possibly cached) result of func(*args, **kwargs)
        """
        # get debug info
        # see https://stackoverflow.com/a/24439444/4442749
        func_name = getattr(func, '__name__', repr(func))
        try:
            caller = inspect.getframeinfo(inspect.stack()[1][0])
            location = '%s:%d' % (caller.filename, caller.lineno)
        except Exception:
            location = '<file>:<lineno>'
        call_key = _create_call_key(func, None, *args, **kwargs)
        cached_value = request.config.cache.get(call_key, {"refresh_key": None, "value": None})
        value = cached_value["value"]
        current_refresh_key = str(b64encode(pickle.dumps(_refresh_key)), encoding='utf8')
        cached_refresh_key = cached_value.get("refresh_key")
        if (
            _invalidate_cache  # force invalidate
            or cached_refresh_key is None  # first time caching this call
            or current_refresh_key != cached_refresh_key  # refresh_key has changed
        ):
            print("Cache invalidated for '%s' @ %s" % (func_name, location))
            result = func(*args, **kwargs)
            value = str(b64encode(pickle.dumps(result)), encoding='utf8')
            request.config.cache.set(
                key=call_key,
                value={
                    "refresh_key": current_refresh_key,
                    "value": value
                }
            )
        else:
            print("Cache hit for '%s' @ %s" % (func_name, location))
            result = pickle.loads(b64decode(bytes(value, encoding='utf8')))
        return result
    return _cached


_args_marker = object()
_kwargs_marker = object()


def _create_call_key(func: callable, refresh_key: Any, *args, **kwargs):
    """Produces a hex hash str of the call func(*args, **kwargs)"""
    # producing a key from func + args
    # see https://stackoverflow.com/a/10220908/4442749
    call_key = pickle.dumps(
        (func, refresh_key) +
        (_args_marker, ) +
        tuple(args) +
        (_kwargs_marker,) +
        tuple(sorted(kwargs.items()))
    )
    # create a hex digest of the key for the filename
    m = hashlib.sha256()
    m.update(bytes(call_key))
    return m.digest().hex()
I am creating a decorator that catches a raised error in its target function and allows the user to continue executing the script (bypassing the function) or drop out of the script.
def catch_error(func):
    """
    This decorator is used to make sure that if a decorated function breaks
    in the execution of a script, the script doesn't automatically crash.
    Instead, it gives you the choice to continue or gracefully exit.
    """
    def caught(*args):
        try:
            return func(*args)
        except Exception as err:
            question = '\n{0} failed. Continue? (yes/no): '.format(func.func_name)
            answer = raw_input(question)
            if answer.lower() in ['yes', 'y']:
                pass
            else:
                print " Aborting! Error that caused failure:\n"
                raise err
            return None
    return caught
Notice that, if the user chooses to bypass the error-returning function and continue executing the script, the decorator returns None. This works well for functions that only return a single value, but it is crashing on functions that attempt to unpack multiple values. For instance,
# Both function and decorator return single value, so works fine
one_val = decorator_works_for_this_func()
# Function nominally returns two values, but decorator only one, so this breaks script
one_val, two_val = decorator_doesnt_work_for_this_func()
Is there a way that I can determine the number of values my target function is supposed to return? For instance, something like:
def better_catch_error(func):
    def caught(*args):
        try:
            return func(*args)
        except Exception as err:
            ...
            num_rvals = determine_num_rvals(func)
            if num_rvals > 1:
                return [None for count in range(num_rvals)]
            else:
                return None
    return caught
As always, if there is a better way to do this sort of thing, please let me know. Thanks!
UPDATE:
Thanks for all the suggestions. I decided to narrow the scope of catch_error to a single class of functions, which only return one string value. I just split all the functions returning more than one value into separate functions that return a single value to make them compatible. I had been hoping to make catch_error more generic (and there were several helpful suggestions on how to do that), but for my application it was a little overkill. Thanks again.
Martijn Pieters' answer is correct: this is a specific case of the Halting Problem.
However, you might get around it by passing an error return value to the decorator. Something like this:
def catch_error(err_val):
    def wrapper(func):
        def caught(*args):
            try:
                return func(*args)
            except Exception as err:
                question = '\n{0} failed. Continue? (yes/no): '.format(func.func_name)
                answer = raw_input(question)
                if answer.lower() in ['yes', 'y']:
                    pass
                else:
                    print " Aborting! Error that caused failure:\n"
                    raise err
                return err_val
        return caught
    return wrapper
Then you could decorate using:
@catch_error({})
def returns_a_dict(*args, **kwargs):
    return {'a': 'foo', 'b': 'bar'}
Also as a point of style, if you are grabbing *args in your wrapped function, you should probably also grab **kwargs so that you can properly wrap functions that take keyword arguments. Otherwise your wrapped function will fail if you call wrapped_function(something=value)
Finally, as another point of style, it is confusing to see code that does if a: pass with an else. Try using if not a: in these cases. So the final code:
def catch_error(err_val):
    def wrapper(func):
        def caught(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as err:
                question = '\n{0} failed. Continue? (yes/no): '.format(func.func_name)
                answer = raw_input(question)
                if answer.lower() not in ['yes', 'y']:
                    print " Aborting! Error that caused failure:\n"
                    raise err
                return err_val
        return caught
    return wrapper
No, there is no way you can determine that.
Python is a dynamic language, and a given function can return an arbitrary sequence of any size or no sequence at all, based on the inputs and other factors.
For example:
import random

def foo():
    if random.random() < 0.5:
        return ('spam', 'eggs')
    return None
This function will return a tuple half of the time, but None the other half, so Python has no way of telling you what foo() will return.
There are many more ways your decorator can fail, btw, not just when the function returns a sequence that the caller then unpacks. What if the caller expected a dictionary and starts trying to access keys, or a list and wants to know the length?
Your decorator can't predict what your function is going to return, but nothing prevents you from telling the decorator what return signature to simulate:
@catch_error([None, None])
def tuple_returner(n):
    raise Exception
    return [2, 3]
Instead of returning None, your decorator will return its argument ([None, None]).
Writing an argument-taking decorator is just slightly tricky: The expression catch_error([None, None]) will be evaluated, and must return the actual decorator that will be applied to the decorated function. It looks like this:
def catch_error(signature=None):
    def _decorator(func):
        def caught(*args):
            try:
                return func(*args)
            except Exception as err:
                # Interactive code suppressed
                return signature
        return caught
    return _decorator
Note that even if you just want it to return None, you need to execute it once:
@catch_error()
def some_function(x):
    ...
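A quick non-interactive check of the signature-returning behaviour (the interactive prompt is omitted, as in the snippet above; failing_pair is a made-up example):

```python
def catch_error(signature=None):
    def _decorator(func):
        def caught(*args):
            try:
                return func(*args)
            except Exception:
                # interactive prompt omitted for brevity
                return signature
        return caught
    return _decorator

@catch_error((None, None))
def failing_pair():
    raise RuntimeError("boom")

a, b = failing_pair()  # unpacking works: two Nones come back
print(a, b)  # None None
```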
This question is a follow-up to this brilliant answer on decorators in Python:
I use the given "snippet to make any decorator accept generically any argument".
Then I have this (here simplified) decorator:
@decorator_with_args
def has_permission_from_kwarg(func, *args, **kwargs):
    """Decorator to check simple access/view rights by the kwarg."""
    def wrapper(*args_1, **kwargs_1):
        if 'kwarg' in kwargs_1:
            kwarg = kwargs_1['kwarg']
        else:
            raise HTTP403Error()
        return func(*args_1, **kwargs_1)
    return wrapper
Working with this decorator, there's no problem; it does the job very well.
Testing a similar decorator that does not absolutely require the kwargs gives the same outcome.
But testing this decorator with the following mock does not work:
def test_can_access_kwarg(self):
    """Test simple permission decorator."""
    func = Mock(return_value='OK')
    decorated_func = has_permission_from_slug()(func(kwarg=self.kwarg))
    # It will raise at the following line, whereas the kwarg is provided...
    response = decorated_func()
    self.assertTrue(func.called)
    self.assertEqual(response, 'OK')
It returns the exception I raise when there is no 'kwarg' keyword argument...
Does anyone have a clue how to test (preferably by mocking) such a decorator, itself decorated by another decorator, that requires access to one of the keyword arguments passed to the function?
decorated_func = has_permission_from_slug()(func(kwarg=self.kwarg))
This will:

1. Execute func(kwarg=self.kwarg).
2. Generate an instance of the actual decorator.
3. Call the decorator on the result of the func call (i.e. the result).
4. Return the wrapper, which will then later try to call the result from step 3 (which would fail).
response = decorated_func()
This will then call the returned wrapper with no arguments, so **kwargs_1 is empty. Also, if your wrapper didn't raise an exception in this case, the subsequent call of func(..) would throw an exception, because the return value of func (the original one) is probably not callable (see above).
What you probably want to do instead, or at least what your decorator supports, is this:
decorated_func = has_permission_from_kwarg()(func)
response = decorated_func(kwarg=self.kwarg)
Or, if you want to pass your kwarg in the decorator like this:
decorated_func = has_permission_from_kwarg(kwarg=self.kwarg)(func)
response = decorated_func()
Then you need to adjust or decorator to actually use kwargs, and not kwargs_1 in the check (the latter are the arguments to the decorated function).
I’m testing your original decorator definition (with no changes) and the decorator_with_args as defined in the linked answer with the following code:
class HTTP403Error(Exception):
    pass

def func(*args, **kwargs):
    print('func {}; {}'.format(args, kwargs))

my_kwarg = 'foo'
decorated_func = has_permission_from_kwarg()(func)
decorated_func(kwarg=my_kwarg)
decorated_func(not_kwarg=my_kwarg)
As expected, I get func (); {'kwarg': 'foo'} printed for the first call, and a HTTP403 exception for the second.
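The same check works with a Mock once the mock itself (not its return value) is passed to the decorator. Here is a self-contained sketch using a simplified, plain version of the decorator; the decorator_with_args machinery from the linked answer is omitted:

```python
from unittest.mock import Mock

class HTTP403Error(Exception):
    pass

def has_permission_from_kwarg(func):
    """Simplified: a plain decorator, no decorator_with_args wrapper."""
    def wrapper(*args, **kwargs):
        if 'kwarg' not in kwargs:
            raise HTTP403Error()
        return func(*args, **kwargs)
    return wrapper

func = Mock(return_value='OK')
decorated_func = has_permission_from_kwarg(func)  # pass the mock, don't call it

print(decorated_func(kwarg='foo'))  # OK
print(func.called)                  # True
```

Calling `decorated_func()` with no kwarg raises HTTP403Error, which is exactly what the original failing test was doing by accident.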
I am working on a quick python script using the cmd module that will allow the user to enter text commands followed by parameters in basic url query string format. The prompts will be answered with something like
commandname foo=bar&baz=brack
Using cmd, I can't seem to find which method to override to affect the way the argument line is handed off to all the do_* methods. I want to run urlparse.parse_qs on these values, and calling this on `line` in every do_* method seems clumsy.
The precmd method gets the whole line, before the commandname is split off and interpreted, so this will not work for my purposes. I'm also not terribly familiar with how to place a decorator inside a class like this and haven't been able to pull it off without breaking the scope.
Basically, the Python docs for cmd say the following:
Repeatedly issue a prompt, accept input, parse an initial prefix off
the received input, and dispatch to action methods, passing them the
remainder of the line as argument.
I want to make a method that will do additional processing to that "remainder of the line" and hand that generated dictionary off to the member functions as the line argument, rather than interpreting them in every function.
Thanks!
You could potentially override the onecmd() method, as the following quick example shows. The onecmd() method there is basically a copy of the one from the original cmd.py, but adds a call to urlparse.parse_qs() before passing the arguments to a function.
import cmd
import urlparse

class myCmd(cmd.Cmd):
    def onecmd(self, line):
        """Mostly ripped from Python's cmd.py"""
        cmd, arg, line = self.parseline(line)
        arg = urlparse.parse_qs(arg)  # <- added line
        if not line:
            return self.emptyline()
        if cmd is None:
            return self.default(line)
        self.lastcmd = line
        if cmd == '':
            return self.default(line)
        else:
            try:
                func = getattr(self, 'do_' + cmd)
            except AttributeError:
                return self.default(line)
            return func(arg)

    def do_foo(self, arg):
        print arg

my_cmd = myCmd()
my_cmd.cmdloop()
Sample output:
(Cmd) foo
{}
(Cmd) foo a b c
{}
(Cmd) foo a=b&c=d
{'a': ['b'], 'c': ['d']}
Is this what you are trying to achieve?
Here's another potential solution that uses a class decorator to modify a
cmd.Cmd subclass and basically apply a decorator function to all do_*
methods of that class:
import cmd
import urlparse
import types

# function decorator to add parse_qs to individual functions
def parse_qs_f(f):
    def f2(self, arg):
        return f(self, urlparse.parse_qs(arg))
    return f2

# class decorator to iterate over all attributes of a class and apply
# the parse_qs_f decorator to all do_* methods
def parse_qs(cls):
    for attr_name in dir(cls):
        attr = getattr(cls, attr_name)
        if attr_name.startswith('do_') and type(attr) == types.MethodType:
            setattr(cls, attr_name, parse_qs_f(attr))
    return cls

@parse_qs
class myCmd(cmd.Cmd):
    def do_foo(self, args):
        print args

my_cmd = myCmd()
my_cmd.cmdloop()
I quickly cobbled this together and it appears to work as intended; however, I'm open to suggestions on any pitfalls or ways this solution could be improved.
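For reference, the same class-decorator idea ported to Python 3 (urlparse moved to urllib.parse, and since unbound methods are plain functions in Python 3, the types.MethodType check becomes a callable check; the class here is a minimal stand-in, not a cmd.Cmd subclass):

```python
import urllib.parse

def parse_qs_f(f):
    # wrap a single do_* method so it receives a parsed dict
    def f2(self, arg):
        return f(self, urllib.parse.parse_qs(arg))
    return f2

def parse_qs_deco(cls):
    # apply parse_qs_f to every do_* method on the class
    for attr_name in dir(cls):
        attr = getattr(cls, attr_name)
        if attr_name.startswith('do_') and callable(attr):
            setattr(cls, attr_name, parse_qs_f(attr))
    return cls

@parse_qs_deco
class MyCmd:
    def do_foo(self, args):
        return args

print(MyCmd().do_foo('a=b&c=d'))  # {'a': ['b'], 'c': ['d']}
```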