Is there a way to forward function arguments without hiding the fact that the original call did or did not provide optional arguments?
def func1(a=x):
    # do stuff

def func2(b=y):
    # pass args to func1 without masking func1 defaults
    return func1(?)
A call to func2() should result in func1() being called without arguments or at least with its default arguments, whatever they may be.
The following almost works, but fundamentally I don't know if there is a way for func2 to determine whether its defaults were invoked or not.
def func2(b=y):
    # this comes close but what if func2(y) is called?
    if b == y:
        return func1()
    else:
        return func1(b)
The usual way of determining if a parameter is left off is to use None as the default. It's unlikely that you'll be calling a function with None so it's a useful marker.
def func2(b=None):
    if b is None:
        return func1()
    else:
        return func1(b)
I suspect the right way to do this is to have your func2 function use a sentinel value as its default argument, so you can recognize it easily. If you get that sentinel, you can set up the arguments you'll pass on to func1 however you want (e.g. not passing any argument). You can use argument unpacking to handle passing a variable number of arguments (such as 0-1).
A common sentinel is None, though if that could be a meaningful value for a caller to pass, you may want to use something else (an instance of object is a common choice). Here's an example:
def func1(a="default value"): # lets assume we don't know what this default is
# do stuff with a
# later, perhaps in a different module
_sentinel = object() # our sentinel object
def func2(b=_sentinel):
if b is _sentinel: # test for the sentinel
b = "some useful value"
a_args = () # arguments to func1 is an empty tuple
else:
a_args = (b,) # pack b into a 1-tuple
# do stuff with b perhaps
func1(*a_args) # call func1 with appropriate arguments (either b or nothing)
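A few example calls may help show the behaviour (assuming the definitions above):
func2()            # sentinel detected: func1() is called with no arguments, so its own default applies
func2("explicit")  # "explicit" is packed into a_args and forwarded to func1
func2(None)        # even None is forwarded, because the sentinel is a distinct object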
Note that this design is relatively silly. Most of the time you'll either call func1 with an argument in all cases, or you'll call it without an argument in all cases. You rarely need to conditionally pass an argument like this.
See this answer:
https://stackoverflow.com/a/2088101/933416
There is no way to get the information you want from the internals. To detect whether defaults were used, you would need to re-implement the internal default argument processing within the function, i.e.:
def func2(*args, **kwargs):
    if len(args) == 0 and "b" not in kwargs:
        b = y  # the default would apply here
        return func1()
    else:
        b = args[0] if args else kwargs["b"]
        return func1(b)
Now from the first check we guarantee that func2() was called as opposed to func2(y) or func2(b=y). In almost every case, the unique object sentinel is good enough to avoid having to truly guarantee how it was called, but it can be done.
But judging from the fact that you immediately return the result of func1, I see no reason why func2 even has default arguments. In the default call (func2()), that y is never used. So why is it there? Why don't you just define func2(*a, **k) and pass them directly to func1?
Argument forwarding should be done with variadic arguments:
def func2(*args, **kwargs):
    func1(*args, **kwargs)
Everything will just work, although introspection can suffer a bit.
If you sometimes need to not pass on an argument, you can remove it from kwargs before forwarding:
del kwargs["name"]
Here's a fuller example:
def print_wrapper(*args, extrabig=False, **kwargs):
    if extrabig:
        args = [arg*2 for arg in args]
        kwargs["sep"] = kwargs.get("sep", " ") * 2
    print(*args, **kwargs)

print_wrapper(2, 4, 8, end="!!!\n")
#>>> 2 4 8!!!

print_wrapper(2, 4, 8, sep=", ", end="!!!\n")
#>>> 2, 4, 8!!!

print_wrapper(2, 4, 8, extrabig=True, end="!!!\n")
#>>> 4  8  16!!!
If you really don't want to do this (although you'd be wrong), you can use object to generate a unique sentinel.
# Bad! Won't let you print None
def optionally_print_one_thing(thing=None):
    if thing is not None:
        print(thing)

# Better
_no_argument = object()

def optionally_print_one_thing(thing=_no_argument):
    if thing is not _no_argument:
        print(thing)
What is your exact use case? func2 should be smart enough to only pass on the appropriate params to func1, and that should rely on the default values of any parameters.
The only time I have ever found it necessary to change how func2 calls func1 is when func1 is a C function with a screwy signature:
def func2(this, that, those=None):
    if those is None:
        return func1(this, that)
    else:
        return func1(this, that, those)
Related
I only just started learning Python and found out that I can pass a function as the parameter of another function. Now if I call foo(bar()) it will not pass as a function pointer but the return value of the used function. Calling foo(bar) will pass the function, but this way I am not able to pass any additional arguments. What if I want to pass a function pointer that calls bar(42)?
I want the ability to repeat a function regardless of what arguments I have passed to it.
def repeat(function, times):
    for calls in range(times):
        function()

def foo(s):
    print(s)
repeat(foo("test"), 4)
In this case the function foo("test") is supposed to be called 4 times in a row.
Is there a way to accomplish this without having to pass "test" to repeat instead of foo?
You can either use a lambda:
repeat(lambda: bar(42))
Or functools.partial:
from functools import partial
repeat(partial(bar, 42))
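partial can bind keyword arguments as well; for instance, using the built-in print with the question's repeat(function, times):
shout = partial(print, "bar", end="!\n")  # binds one positional and one keyword argument
repeat(shout, 4)                          # prints "bar!" four times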
Or pass the arguments separately:
def repeat(times, f, *args):
    for _ in range(times):
        f(*args)
This final style is quite common in the standard library and major Python tools. *args denotes a variable number of arguments, so you can use this function as
repeat(4, foo, "test")
or
def inquisition(weapon1, weapon2, weapon3):
    print("Our weapons are {}, {} and {}".format(weapon1, weapon2, weapon3))

repeat(10, inquisition, "surprise", "fear", "ruthless efficiency")
Note that I put the number of repetitions up front for convenience. It can't be the last argument if you want to use the *args construct.
(For completeness, you could add keyword arguments as well with **kwargs.)
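For completeness, a version that forwards keyword arguments too might look like this (a sketch):
def repeat(times, f, *args, **kwargs):
    # forward both positional and keyword arguments to f on every call
    for _ in range(times):
        f(*args, **kwargs)

repeat(4, print, "test", end="!\n")  # prints "test!" four times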
You will need to pass the parameters for foo to the repeat function:
#! /usr/bin/python3.2
def repeat(function, params, times):
    for calls in range(times):
        function(*params)

def foo(a, b):
    print('{} are {}'.format(a, b))

repeat(foo, ['roses', 'red'], 4)
repeat(foo, ['violets', 'blue'], 4)
While many of the answers here are good, this one might be helpful because it doesn't introduce any unnecessary repetition and the reason for callbacks in the first place is often to synchronize with other work outside of the main UI thread.
Enjoy!
import time, threading
def callMethodWithParamsAfterDelay(method=None, params=[], seconds=0.0):
    timer = threading.Timer(seconds, method, params)
    timer.start()
    return timer  # return the Timer so it can be cancelled later

def cancelDelayedCall(timer):
    timer.cancel()

# Example
def foo(a, b):
    print('{} are {}'.format(a, b))

callMethodWithParamsAfterDelay(foo, ['roses', 'red'], 0)
I'm trying to use a dictionary as a switch statement as in
def add(first, second):
    return first + second

def sub():
    ...
    return something

operations = {
    "Add": add,
    "Sub": sub
}

ret_val = operations[operation]
Now how can I pass the arguments to add and sub and get their return values? Currently I don't pass anything to the methods and just test ret_val. The operation appears to be looked up, but no return value comes back; what I get is a reference to the operation method.
Thanks!
To call a function, put the arguments in parentheses after it, just like when you call a function directly by its name.
ret_val = operations[operation](1, 2)
Note that for this to work properly, all the functions in operations need to take the same number of arguments. So it won't work if add() takes two arguments but sub() takes none, as you've shown.
If the functions can take different numbers of arguments, you could put the arguments in a tuple (or list) and use the unpacking operator.
args = (1, 2)
ret_val = operations[operation](*args)
Then you just have to ensure that args contains the appropriate number of arguments for the particular operation.
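One way to keep that straight is to store an argument tuple per operation next to the function table (a sketch; the values here are illustrative):
operation_args = {
    "Add": (1, 2),  # add() takes two arguments
    "Sub": (),      # sub() takes none, as in the question
}
ret_val = operations[operation](*operation_args[operation])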
The dictionary contains callable functions. To call them, just add the arguments in parentheses.
operations[operation](arg1, ...)
So, the main thing you're missing is executing the function call. The code as provided grabs the function reference properly, but you need parens to execute it.
Once you execute it, you need some way to pass arguments. Because the number of args varies by function, the best way is to pass both a variable number of args list (*args) and a dictionary of keyword arguments (**kwargs).
I've filled in your pseudocode slightly so these run:
def add(first, second):
    return first + second

def sub(first, second):
    return first - second

operations = {
    "Add": add,
    "Sub": sub,
}
Call add with args:
operation = 'Add'
op_args = [1, 2]
op_kwargs = {}
ret_val = operations[operation](*op_args, **op_kwargs)
print(ret_val)
3
Call add with kwargs:
operation = 'Add'
op_args = []
op_kwargs = {'first': 3, 'second': 4}
ret_val = operations[operation](*op_args, **op_kwargs)
print(ret_val)
7
If you try to pass both args and kwargs in a conflicting way, it will fail:
# WON'T WORK
operation = 'Add'
op_args = [1, 2]
op_kwargs = {'first': 3, 'second': 4}
ret_val = operations[operation](*op_args, **op_kwargs)
print(ret_val)
TypeError: add() got multiple values for argument 'first'
But you can use both in a complementary way:
operation = 'Add'
op_args = [1]
op_kwargs = {'second': 4}
ret_val = operations[operation](*op_args, **op_kwargs)
print(ret_val)
5
One technical note: the names args and kwargs are purely a convention in Python. You could call them whatever you want. An answer that discusses the two in more detail is available here: https://stackoverflow.com/a/36908/149428.
Note that I did not do any input validation, etc., in order to keep the answer simple and focused. If you're getting input from a user, that's an important step to remember.
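For instance, a minimal guard against an unknown operation name might look like this (a sketch, assuming operation comes from user input):
operation = input("Operation? ")  # e.g. the user types "Add"
func = operations.get(operation)
if func is None:
    raise ValueError(f"Unknown operation: {operation!r}")
ret_val = func(*op_args, **op_kwargs)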
I am trying to create a set of functions in python that will all do a similar operation on a set of inputs. All of the functions have one input parameter fixed and half of them also need a second parameter. For the sake of simplicity, below is a toy example with only two functions.
Now, I want, in my script, to run the appropriate function, depending on what the user input as a number. Here, the user is the random function (so the minimum example works). What I want to do is something like this:
import random

def function_1(*args):
    return args[0]

def function_2(*args):
    return args[0] * args[1]

x = 10
y = 20
i = random.randint(1, 2)
f = function_1 if i == 1 else function_2
return_value = f(x, y)
And it works, but it seems messy to me. I would rather have function_1 defined as
def function_1(x):
    return x
but that will not work as easily. Another way would be to define
def function_1(x, y):
    return x
but that leaves me with a dangling y parameter. Is my way the "proper" way of solving my problem, or does there exist a better way?
There are a couple of approaches here, all of them adding more boilerplate code.
There is also this PEP which may be interesting to you.
But the 'pythonic' way of doing it is not as elegant as the usual function overloading, due to the fact that functions are just class attributes.
So you can either go with a function like this:
def foo(*args):
and then count how many args you've got, which will be very broad but very flexible as well.
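A minimal sketch of that argument-counting approach, using the question's two behaviours (the name foo is illustrative):
def foo(*args):
    if len(args) == 1:
        return args[0]            # behaves like function_1
    elif len(args) == 2:
        return args[0] * args[1]  # behaves like function_2
    raise TypeError("foo() takes 1 or 2 arguments")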
Another approach is default arguments:
def foo(first, second=None, third=None):
less flexible but easier to predict. And then, lastly, you can also use:
def foo(anything):
and detect the type of anything inside your function, acting accordingly.
Your monkey-patching example can work too, but it becomes more complex if you use it with class methods, and does make introspection tricky.
EDIT: Also, for your case you may want to keep the functions separate and write a single 'dispatcher' function that calls the appropriate function for you depending on the arguments, which is probably the best solution considering the above.
EDIT2: Based on your comments, I believe the following approach may work for you:
def weigh_dispatcher(*args, **kwargs):
    # decide which function to call based on args
    if 'somethingspecial' in kwargs:
        return weight2(*args, **kwargs)

def weight_prep(arg):
    # common part here

def weight1(arg1, arg2):
    weight_prep(arg1)
    # rest of the func

def weight2(arg1, arg2, arg3):
    weight_prep(arg1)
    # rest of the func
Alternatively, you can move the common part into the dispatcher.
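That variant might look roughly like this (a sketch; weight1 and weight2 would then drop their own weight_prep calls):
def weigh_dispatcher(*args):
    weight_prep(args[0])  # the common part now lives in the dispatcher
    if len(args) == 3:
        return weight2(*args)
    return weight1(*args)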
You may also have a function with optional second argument:
def function_1(x, y=None):
    if y is not None:
        return x + y
    else:
        return x
Here's the sample run:
>>> function_1(3)
3
>>> function_1(3, 4)
7
Or even optional multiple arguments! Check this out:
def function_2(x, *args):
    return x + sum(args)
And the sample run:
>>> function_2(3)
3
>>> function_2(3, 4)
7
>>> function_2(3, 4, 5, 6, 7)
25
Inside the function you can treat args like a list:
def function_3(x, *args):
    if len(args) < 1:
        return x
    else:
        return x + sum(args)
And the sample run:
>>> function_3(1,2,3,4,5)
15
For example, I have a basic method that will return a list of permutations.
import itertools
def perms(elements, set_length=elements):
    data = []
    for x in range(elements):
        data.append(x + 1)
    return list(itertools.permutations(data, set_length))
Now I understand that in its current state this code won't run because the second elements isn't defined, but is there an elegant way to accomplish what I'm trying to do here? If that's still not clear, I want to make the default set_length value equal to the first argument passed in. Thanks.
No, function keyword parameter defaults are determined when the function is defined, not when the function is executed.
Set the default to None and detect that:
def perms(elements, setLength=None):
    if setLength is None:
        setLength = elements
If you need to be able to specify None as an argument, use a different sentinel value:
_sentinel = object()
def perms(elements, setLength=_sentinel):
    if setLength is _sentinel:
        setLength = elements
Now callers can set setLength to None and it won't be seen as the default.
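A quick demonstration of the point above, that the default is captured when the def statement runs rather than at call time (illustrative names):
x = 1

def f(a=x):
    return a

x = 2
print(f())  # prints 1 -- the default still holds the value x had when f was defined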
Because of the way Python handles bindings and default parameters...
The standard way is:
def perms(elements, setLength=None):
    if setLength is None:
        setLength = elements
And another option is:
def perms(elements, **kwargs):
    setLength = kwargs.pop('setLength', elements)
Although this requires you to explicitly use perms(elements, setLength='something else') if you don't want a default...
You should do something like:
def perms(elements, setLength=None):
    if setLength is None:
        setLength = elements
Answer 1:
The solution from above looks like this:
def cast_to_string_concat(a, b, c=None):
    c = a if c is None else c
    return str(a) + str(b) + str(c)
While this approach will solve a myriad of potential problems (and maybe yours!), I wanted to write a function where a possible input for variable "c" is indeed the singleton None, so I had to do more digging.
To explain that further, calling the function with the following variables:
A='A'
B='B'
my_var = None
Yields:
cast_to_string_concat(A, B, my_var)
>>>'ABA'
Whereas the user might expect that, since they called the function with three variables, it should print the three variables, like this:
cast_to_string_concat(A, B, my_var)
>>> 'ABNone' # simulated and expected outcome
So this implementation ignores the third variable even when it was provided, which means the function no longer has any way to determine whether or not variable "c" was defined.
So, for my use case, a default value of None would not quite do the trick.
For the answers that suggest this solution, read these:
Is there a way to set a default parameter equal to another parameter value?
Python shortcut for variable default value to be another variable value if it is None
Function argument's default value equal to another argument
What is the pythonic way to avoid default parameters that are empty lists?
But, if that doesn't work for you, then maybe keep reading!
A comment in the first link above mentions using a _sentinel defined by object().
So this solution removes the use of None and replaces it with object(), by way of an implied private sentinel.
Answer 2:
_sentinel = object()

def cast_to_string_concat(a, b, c=_sentinel):
    c = a if c is _sentinel else c
    return str(a) + str(b) + str(c)

A = 'A'
B = 'B'
C = 'C'
A='A'
B='B'
C='C'
cast_to_string_concat(A, B, C)
>>> 'ABC'
cast_to_string_concat(A,B)
>>> 'ABA'
So this is pretty awesome! It correctly handles the above edge case! See for yourself:
A='A'
B='B'
C = None
cast_to_string_concat(A, B, C)
>>> 'ABNone'
So, we're done, right? Is there any plausible way that this might not work? Hmm... probably not! But I did say this was a three-part answer, so onward! ;)
For the sake of completeness, let's imagine our program operates in a space where every possible scenario is indeed possible. (This may not be a warranted assumption, but I imagine that one could derive the value of _sentinel with enough information about the computer's architecture and the implementation of the choice of the object.) So, if you are willing, let us assume that is indeed possible, and let's imagine we decide to test that hypothesis by passing the _sentinel defined above.
_sentinel = object()

def cast_to_string_concat(a, b, c=_sentinel):
    c = a if c is _sentinel else c
    return str(a) + str(b) + str(c)
A='A'
B='B'
S = _sentinel
cast_to_string_concat(A, B, S)
>>> 'ABA'
Wait a minute! I entered three arguments, so I should see the string concatenation of the three of them together!
*cue entering the land of unforeseen consequences*
I mean, not actually. A response of: "That's negligible edge case territory!!" or its ilk is perfectly warranted.
And that sentiment is right! For this case (and probably most cases) this is really not worth worrying about!
But if it is worth worrying about, or if you just want the mathematical satisfaction of eliminating all edge cases you're aware of ... onward!
Exercise left to reader:
Deviating from this technique, you can directly assign c=object() as the default; however, in honesty, I haven't gotten that to work for me. My investigation shows c == object() is False, and str(c) == str(object()) is also False, and that's why I'm using the implementation from Martijn Pieters.
Okay, after that long exercise, we're back!
Recall that the goal is to write a function that could potentially have n inputs, and only when one variable is not provided do you copy another variable into position i.
Instead of defining the variable by default, what if we change the approach to allow an arbitrary number of variables?
So if you're looking for a solution that does not compromise on potential inputs, where a valid input could be None, object(), or _sentinel ... then (and only then) I think my solution will be helpful. The inspiration for the technique came from the second part of Jon Clements' answer.
Answer 3:
My solution to this problem is to rename the function, and wrap it with a function of the previous name; but instead of named parameters, the wrapper takes *args. You then define the original function within the local scope (with the new name), and only allow the few possibilities you desire.
In steps:
Rename function to something similar
Remove the default setup for your optional parameter
Begin to create a new function just above and tab the original function in.
def cast_to_string_append(*args):
Determine the arity of your function (I found that word in my search; it is the number of parameters passed into a given function).
Use a conditional inside that determines whether you entered a valid number of variables, and adjust accordingly!
def cast_to_string_append(*args):
    def string_append(a, b, c):
        # this is the original function; it is only called within the wrapper
        return str(a) + str(b) + str(c)

    if len(args) == 2:
        # if two arguments, then set the third to be the first
        return string_append(*args, args[0])
    elif len(args) == 3:
        # if three arguments, then call the function as written
        return string_append(*args)
    else:
        raise Exception(f'Function: cast_to_string_append() accepts two or three arguments, and you entered {len(args)}.')
# instantiation
A='A'
B='B'
C='C'
D='D'
_sentinel = object()
S = _sentinel
N = None
""" Answer 3 Testing """
# two variables
cast_to_string_append(A,B)
>>> 'ABA'
# three variables
cast_to_string_append(A,B,C)
>>> 'ABC'
# three variables, one is _sentinel
cast_to_string_append(A,B,S)
>>>'AB<object object at 0x10c56f560>'
# three variables, one is None
cast_to_string_append(A,B,N)
>>>'ABNone'
# one variable
cast_to_string_append(A)
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 1.
# four variables
cast_to_string_append(A,B,C,D)
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 4.
# ten variables
cast_to_string_append(0,1,2,3,4,5,6,7,8,9)
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 10.
# no variables
cast_to_string_append()
>>>Traceback (most recent call last):
>>> File "<input>", line 1, in <module>
>>> File "<input>", line 13, in cast_to_string_append
>>>Exception: Function: cast_to_string_append() accepts two or three arguments, and you entered 0.
""" End Answer 3 Testing """
So, in summary:
Answer 1 - the simplest answer, and works for most cases.
def cast_to_string_concat(a, b, c=None):
    c = a if c is None else c
    return str(a) + str(b) + str(c)
Answer 2 - use this when None could actually be a meaningful value for the parameter, by switching to an object() sentinel, _sentinel.
_sentinel = object()

def cast_to_string_concat(a, b, c=_sentinel):
    c = a if c is _sentinel else c
    return str(a) + str(b) + str(c)
Answer 3 seeks out a general solution utilizing a wrapper function with arbitrary arity using *args, and handles the acceptable cases inside:
def cast_to_string_append(*args):
    def string_append(a, b, c):
        # this is the original function; it is only called within the wrapper
        return str(a) + str(b) + str(c)

    if len(args) == 2:
        # if two arguments, then set the third to be the first
        return string_append(*args, args[0])
    elif len(args) == 3:
        # if three arguments, then call the function as written
        return string_append(*args)
    else:
        raise Exception(f'Function: cast_to_string_append() accepts two or three arguments, and you entered {len(args)}.')
Use what works for you! But for me, I'll be using Option 3 ;)
If I have to wrap an existing method, let us say wrapee(), from a new method, say wrapper(), and wrapee() provides default values for some arguments, how do I preserve its semantics without introducing unnecessary dependencies and maintenance? Let us say the goal is to be able to use wrapper() in place of wrapee() without having to change the client code. E.g., if wrapee() is defined as:
def wrapee(param1, param2="Some Value"):
# Do something
Then, one way to define wrapper() is:
def wrapper(param1, param2="Some Value"):
# Do something
wrapee(param1, param2)
# Do something else.
However, wrapper() has to make assumptions about the default value for param2, which I don't like. If I had control over wrapee(), I would define it like this:
def wrapee(param1, param2=None):
    param2 = param2 or "Some Value"
    # Do something
Then, wrapper() would change to:
def wrapper(param1, param2=None):
    # Do something
    wrapee(param1, param2)
    # Do something else.
If I don't have control over how wrapee() is defined, how best to define wrapper()? One option that comes to mind is to create a dict with the non-None arguments and pass it as keyword arguments, but it seems unnecessarily tedious.
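For reference, the dict-of-non-None-arguments idea would look roughly like this (a sketch):
def wrapper(param1, param2=None):
    forwarded = {}
    if param2 is not None:
        forwarded["param2"] = param2  # only forward what the caller actually supplied
    # Do something
    wrapee(param1, **forwarded)
    # Do something else.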
Update:
The solution is to use both the list and dictionary arguments like this:
def wrapper(param1, *args, **argv):
    # Do something
    wrapee(param1, *args, **argv)
    # Do something else.
All the following calls are then valid:
wrapper('test1')
wrapper('test1', 'test2')
wrapper('test1', param2='test2')
wrapper(param2='test2', param1='test1')
Check out argument lists in the Python docs.
>>> def wrapper(param1, *stuff, **kargs):
...     print(param1)
...     print(stuff)
...     print(kargs)
...
>>> wrapper(3, 4, 5, foo=2)
3
(4, 5)
{'foo': 2}
Then to pass the args along:
wrapee(param1, *stuff, **kargs)
The *stuff is a variable number of non-named arguments, and the **kargs is a variable number of named arguments.
I'd hardly say that it isn't tedious, but the only approach I can think of is to introspect the function you are wrapping to determine whether any of its parameters have default values. You can get the list of parameters and then determine which is the first that has a default value:
from inspect import getargspec
method_signature = getargspec(method)
param_names = method_signature[0]
default_values = method_signature[3]
params = []
# If any of method's parameters has default values, we need
# to know the index of the first one that does.
param_with_default_loc = -1
if default_values is not None and len(default_values) > 0:
    param_slice_index = len(default_values) * -1
    param_with_default = param_names[param_slice_index:][0]
    param_with_default_loc = param_names.index(param_with_default)
At that point, you can iterate over param_names, copying into the dict that is passed to wrapee. Once your index >= param_with_default_loc, you can obtain the default values by looking in the default_values list at an index of your index - param_with_default_loc.
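Sketched out, that forwarding loop might look like this, continuing the snippet above (note that getargspec is deprecated in modern Python in favour of inspect.signature):
def wrapper(*args, **kwargs):
    forwarded = {}
    for index, name in enumerate(param_names):
        if index < len(args):
            forwarded[name] = args[index]       # value supplied positionally
        elif name in kwargs:
            forwarded[name] = kwargs[name]      # value supplied by keyword
        elif param_with_default_loc != -1 and index >= param_with_default_loc:
            # fall back to the wrapped method's own default value
            forwarded[name] = default_values[index - param_with_default_loc]
    return method(**forwarded)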
Does that make any sense?
Of course, to make this generic, you would have to define it as a wrapper function, adding yet another layer of wrapping.
def wrapper(param1, param2=None):
    if param2 is not None:
        wrapee(param1, param2)
    else:
        wrapee(param1)
Is this what you want?
#!/usr/bin/python
from functools import wraps

def my_decorator(f):
    @wraps(f)
    def wrapper(*args, **kwds):
        print('Calling decorated function')
        return f(*args, **kwds)
    return wrapper

def f1(x, y):
    print(x, y)

def f2(x, y="ok"):
    print(x, y)

my_decorator(f1)(1, 2)
my_decorator(f2)(1, 2)
my_decorator(f2)(1)
adapted from http://koala/doc/python2.6-doc/html/library/functools.html#module-functools