I'm trying to understand the python data model better and ran into something odd.
def foo(a, b=2):
    return a / b
assert foo(20) == 10.0
# note: for sanity purposes, should also change signature, but not needed for effect
foo.__defaults__ = (10,)
assert foo(20) == 2.0
foo.__defaults__ = ()
foo.__kwdefaults__ = {'b': 10}
foo(20) # raises TypeError: foo() missing 1 required positional argument: 'b'
An error is expected: __kwdefaults__ is for keyword-only arguments, so let's make b a keyword-only argument to try to solve this problem:
from inspect import signature
foo.__signature__ = signature(lambda a, *, b=10: None)
foo(20) # still raises TypeError: foo() missing 1 required positional argument: 'b'
How does the error message relate to what's happening?
What I find strange is that neither the original function, nor my doctored one required b (it always had a default!). Also, b has never been a positional-only argument.
What is happening here? How can one transform foo to make b a keyword-only argument with default 10?
If my original function had the signature I "injected" above, all goes well though:
def foo(a, *, b=2):  # same as previous `foo`, with the signature we want
    return a / b
foo.__kwdefaults__ = {'b': 10} # change kwdefault
assert foo(20) == 2.0 # it works!!
Preemptive note: I know of functools wraps and partial, which I could use -- though in my context, I'd rather change the function itself, not a wrapped version. My question is about the behavior I created in the code above: How did it come about?
Purpose of __signature__
Your issue is that you think you change a function's signature by setting foo.__signature__. However, that is not what happens: you merely attach a Signature object to an attribute of the function, which changes nothing about the function's behaviour. Setting foo.signature or foo.any_other_name would be equally useless.
The only thing __signature__ does is change the behaviour of inspect.signature(): if it is set, inspect.signature() returns the signature stored in function.__signature__ instead of deriving one from the function itself. In other words, __signature__ affects introspection only, not the function itself.
See ekhumoro's comment for the link to the appropriate PEP.
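A small sketch that makes the metadata/behaviour split visible, reusing the foo from the question:
import inspect

def foo(a, b=2):
    return a / b

# Attach a Signature object; this is purely metadata.
foo.__signature__ = inspect.signature(lambda a, *, b=10: None)

print(inspect.signature(foo))  # (a, *, b=10)  <- inspect reports the attached signature
print(foo(20))                 # 10.0          <- the call behaviour is unchanged (b is still 2)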
TypeError
As for the TypeError: in foo(), b is not a keyword-only argument:
def foo(a, b=2):
    return a / b
It is a positional argument with a default value, so its default is stored in foo.__defaults__. When you set foo.__defaults__ = (), you erased those defaults. After that, b no longer has a default value and must be passed explicitly.
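A minimal sketch of that, reusing the foo from the question:
def foo(a, b=2):
    return a / b

print(foo.__defaults__)   # (2,)  <- the default for the positional parameter b lives here
foo.__defaults__ = ()     # erase the defaults
try:
    foo(20)
except TypeError as e:
    print(e)              # foo() missing 1 required positional argument: 'b'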
Changing signatures
How can one transform foo to make b be a keyword-only argument with default 10.
You cannot change a function's signature during runtime. Period.
Changing default values
You can, however, change b's default value to 10 via
>>> foo.__defaults__ = (10,)
>>> foo(2)
0.2
Since positional arguments with default values cannot be followed by positional arguments without defaults, the tuple __defaults__ is applied to the positional arguments from right to left.
So you can also give a a default value of e.g. 20 via
>>> foo.__defaults__ = (20, 10)
>>> foo()
2.0
I'm creating a wrapper for a function with functools.wraps. My wrapper has the effect of overriding a default parameter (and it doesn't do anything else):
def add(*, a=1, b=2):
    "Add numbers"
    return a + b
@functools.wraps(add)
def my_add(**kwargs):
    kwargs.setdefault('b', 3)
    return add(**kwargs)
This my_add definition behaves the same as
@functools.wraps(add)
def my_add(*, a=1, b=3):
    return add(a=a, b=b)
except that I didn't have to manually type out the parameter list.
However, when I run help(my_add), I see the help string for add, which has the wrong function name and the wrong default argument for the parameter b:
add(*, a=1, b=2)
Add numbers
How can I override the function name and the default argument in this help() output?
(Or, is there a different way to define my_add, using for example some magic function my_add = magic(add, func_name='my_add', kwarg_defaults={'b': 3}) that will do what I want?)
Let me try and explain what happens.
When you call the help function, it requests information about your function using the inspect module. Therefore you have to change the function's reported signature in order to change the default argument that is displayed.
Now this is not something that is advised or often preferred, but who cares about that, right? The provided solution is hacky and probably won't work for all versions of Python, so you might want to reconsider how important the help output really is... Anyway, let's start with some explanation of how it was done, followed by the code and a test case.
Copying functions
The first thing we do is copy the entire function, because I only want to change the signature of the new function, not the original one. This decouples the new my_add signature (and default values) from the original add function.
See:
How to create a copy of a python function
How can I make a deepcopy of a function in Python?
For ideas of how to do this (I will show my version in a bit).
Copying / updating signature
The next step is to get a copy of the function's signature; for that, this post was very useful, except for the part where we have to adjust the signature's parameters to match the new default keyword arguments.
For that we have to change a value inside a mappingproxy, which you can see when running the debugger on the return value of inspect.signature(g). So far this can only be done by changing private variables (the ones with a leading underscore, _private). That is why this solution is considered hacky and is not guaranteed to withstand future updates. That said, let's see the solution!
Full code
import inspect
import types
import functools


def update_func(f, func_name='', update_kwargs: dict = None):
    """Based on http://stackoverflow.com/a/6528148/190597 (Glenn Maynard)"""
    g = types.FunctionType(
        code=f.__code__,
        globals=f.__globals__.copy(),
        name=f.__name__,
        argdefs=f.__defaults__,
        closure=f.__closure__
    )
    g = functools.update_wrapper(g, f)
    g.__signature__ = inspect.signature(g)
    g.__kwdefaults__ = f.__kwdefaults__.copy()

    # Adjust the requested default arguments
    for key, value in (update_kwargs or {}).items():
        g.__kwdefaults__[key] = value
        g.__signature__.parameters[key]._default = value

    g.__name__ = func_name or g.__name__
    return g
def add(*, a=1, b=2):
    "Add numbers"
    return a + b
my_add = update_func(add, func_name="my_add", update_kwargs=dict(b=3))
Example
if __name__ == '__main__':
    a = 2
    print("*" * 50, f"\nMy add\n", )
    help(my_add)
    print("*" * 50, f"\nOriginal add\n", )
    help(add)
    print("*" * 50, f"\nResults:"
          f"\n\tMy add : a = {a}, return = {my_add(a=a)}"
          f"\n\tOriginal add: a = {a}, return = {add(a=a)}")
Output
**************************************************
My add
Help on function my_add in module __main__:
my_add(*, a=1, b=3)
Add numbers
**************************************************
Original add
Help on function add in module __main__:
add(*, a=1, b=2)
Add numbers
**************************************************
Results:
My add : a = 2, return = 5
Original add: a = 2, return = 4
Usage
f: is the function that you want to update
func_name: is optionally the new name of the function (if empty, keeps the old name)
update_kwargs: is a dictionary containing the key and value of the default arguments that you want to update.
Notes
The solution makes full copies of the relevant dictionaries, so that there is no impact on the original add function.
The _default value is a private variable and may change in future releases of Python (a sketch of a possible alternative using the public replace() APIs follows below).
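If you prefer not to touch _default at all, here is a sketch of an alternative (a hypothetical helper, untested across Python versions, and note that it mutates the function you pass in instead of copying it) that goes through the public Parameter.replace() and Signature.replace() APIs to update keyword-only defaults:
import inspect

def set_kwarg_defaults(f, **new_defaults):
    # Hypothetical helper: update keyword-only defaults and the reported
    # signature via the public inspect APIs instead of Parameter._default.
    sig = inspect.signature(f)
    params = [p.replace(default=new_defaults.get(name, p.default))
              for name, p in sig.parameters.items()]
    f.__signature__ = sig.replace(parameters=params)                  # what inspect/help() report
    f.__kwdefaults__ = {**(f.__kwdefaults__ or {}), **new_defaults}   # what calls actually use
    return f

def add(*, a=1, b=2):
    "Add numbers"
    return a + b

my_add = set_kwarg_defaults(add, b=3)   # note: this also changes add itself
help(my_add)                            # shows: add(*, a=1, b=3)
print(my_add())                         # 4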
There's a function which takes optional arguments.
def alpha(p1="foo", p2="bar"):
    print('{0},{1}'.format(p1, p2))
Let me iterate over what happens when we use that function in different ways:
>>> alpha()
foo,bar
>>> alpha("FOO")
FOO,bar
>>> alpha(p2="BAR")
foo,BAR
>>> alpha(p1="FOO", p2=None)
FOO,None
Now consider the case where I want to call it like alpha("FOO", myp2) and myp2 will either contain a value to be passed, or be None. But even though the function handles p2=None, I want it to use its default value "bar" instead.
Maybe that's worded confusingly, so let me reword that:
If myp2 is None, call alpha("FOO"). Else, call alpha("FOO", myp2).
The distinction is relevant because alpha("FOO", None) has a different result than alpha("FOO").
How can I concisely (but readably) make this distinction?
One possibility would usually be to check for None within alpha, which would be encouraged because that would make the code safer. But assume that alpha is used in other places where it is actually supposed to handle None as it does.
I'd like to handle that on the caller-side.
One possibility is to do a case distinction:
if myp2 is None:
    alpha("FOO")
else:
    alpha("FOO", myp2)
But that can quickly become a lot of code when there are multiple such arguments (it grows exponentially, 2^n cases).
Another possibility is to simply do alpha("FOO", myp2 or "bar"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.
I am using Python 3.4, but it would be best if your answers provide a way that works in any Python version.
The question is technically finished here, but I will restate one requirement, since the first answer glossed over it:
I want the behaviour of alpha with its default values "foo", "bar" preserved in general, so it is (probably) not an option to change alpha itself.
In yet again other words, assume that alpha is being used somewhere else as alpha("FOO", None) where the output FOO,None is expected behaviour.
Pass the arguments as kwargs from a dictionary, from which you filter out the None values:
kwargs = dict(p1='FOO', p2=None)
alpha(**{k: v for k, v in kwargs.items() if v is not None})
But assume that alpha is used in other places where it is actually supposed to handle None as it does.
To respond to this concern: I have been known to use a None-like sentinel value, which isn't actually None, for this exact purpose.
_novalue = object()

def alpha(p1=_novalue, p2=_novalue):
    if p1 is _novalue:
        p1 = "foo"
    if p2 is _novalue:
        p2 = "bar"
    print('{0},{1}'.format(p1, p2))
Now the arguments are still optional, so you can neglect to pass either of them. And the function handles None correctly. If you ever want to explicitly not pass an argument, you can pass _novalue.
>>> alpha(p1="FOO", p2=None)
FOO,None
>>> alpha(p1="FOO")
FOO,bar
>>> alpha(p1="FOO", p2=_novalue)
FOO,bar
and since _novalue is a special made-up value created for this express purpose, anyone who passes _novalue is certainly intending the "default argument" behavior, as opposed to someone who passes None who might intend that the value be interpreted as literal None.
although ** is definitely a language feature, it's surely not created for solving this particular problem. Your suggestion works, so does mine. Which one works better depends on the rest of the OP's code. However, there is still no way to write f(x or dont_pass_it_at_all)
- blue_note
Thanks to your great answers, I thought I'd try to do just that:
# gen.py
def callWithNonNoneArgs(f, *args, **kwargs):
    kwargsNotNone = {k: v for k, v in kwargs.items() if v is not None}
    return f(*args, **kwargsNotNone)
# python interpreter
>>> import gen
>>> def alpha(p1="foo", p2="bar"):
... print('{0},{1}'.format(p1,p2))
...
>>> gen.callWithNonNoneArgs(alpha, p1="FOO", p2=None)
FOO,bar
>>> def beta(ree, p1="foo", p2="bar"):
... print('{0},{1},{2}'.format(ree,p1,p2))
...
>>> beta('hello', p2="world")
hello,foo,world
>>> beta('hello', p2=None)
hello,foo,None
>>> gen.callWithNonNoneArgs(beta, 'hello', p2=None)
hello,foo,bar
This is probably not perfect, but it seems to work: it's a function that you can call with another function and its arguments, and it applies deceze's answer to filter out the arguments that are None.
You could inspect the default values via alpha.__defaults__ and then use them instead of None. That way you circumvent the hard-coding of default values:
>>> args = [None]
>>> alpha('FOO', *[x if x is not None else y for x, y in zip(args, alpha.__defaults__[1:])])
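A hedged generalisation of the same idea (the helper name fill_defaults_for_none is made up here), which looks the defaults up via inspect.signature instead of indexing __defaults__ by hand:
import inspect

def fill_defaults_for_none(func, **kwargs):
    # Call func, substituting the callee's own default for any kwarg given as None.
    sig = inspect.signature(func)
    cleaned = {
        name: (sig.parameters[name].default if value is None else value)
        for name, value in kwargs.items()
    }
    return func(**cleaned)

def alpha(p1="foo", p2="bar"):
    print('{0},{1}'.format(p1, p2))

fill_defaults_for_none(alpha, p1="FOO", p2=None)   # FOO,bar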
I had the same problem when calling some Swagger generated client code, which I couldn't modify, where None could end up in the query string if I didn't clean up the arguments before calling the generated methods. I ended up creating a simple helper function:
def defined_kwargs(**kwargs):
    return {k: v for k, v in kwargs.items() if v is not None}
>>> alpha(**defined_kwargs(p1="FOO", p2=None))
FOO,bar
It keeps things quite readable for more complex invocations:
def beta(a, b, p1="foo", p2="bar"):
    print('{0},{1},{2},{3}'.format(a, b, p1, p2))
p1_value = "FOO"
p2_value = None
>>> beta("hello",
"world",
**defined_kwargs(
p1=p1_value,
p2=p2_value))
hello,world,FOO,bar
I'm surprised nobody brought this up
def f(p1="foo", p2=None):
    p2 = "bar" if p2 is None else p2
    print(p1 + p2)
You assign None to p2 as the default (or don't, but this way you have the true default in one place in your code) and use an inline if. IMO the most Pythonic answer. Another thing that comes to mind is using a wrapper, but that would be way less readable.
EDIT:
What I'd probably do is use a dummy object as the default value and check for that. So something like this:
class dummy():
    pass

def alpha(p1="foo", p2=dummy()):
    if isinstance(p2, dummy):
        p2 = "bar"
    print("{0},{1}".format(p1, p2))
alpha()
alpha("a","b")
alpha(p2=None)
produces:
foo,bar
a,b
foo,None
Unfortunately, there's no way to do what you want. Even widely adopted python libraries/frameworks use your first approach. It's an extra line of code, but it is quite readable.
Do not use the alpha("FOO", myp2 or "bar") approach, because, as you mention yourself, it creates a terrible kind of coupling, since it requires the caller to know details about the function.
Regarding work-arounds: you could make a decorator for your function (using the inspect module) which checks the arguments passed to it; if one of them is None, it replaces the value with the parameter's own default value.
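A minimal sketch of such a decorator (the name none_means_default is invented here for illustration; note that, like any decorator, it changes the function everywhere it is used):
import functools
import inspect

def none_means_default(func):
    # Replace any argument passed as None with the parameter's own default value.
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in list(bound.arguments.items()):
            default = sig.parameters[name].default
            if value is None and default is not inspect.Parameter.empty:
                bound.arguments[name] = default
        return func(*bound.args, **bound.kwargs)
    return wrapper

@none_means_default
def alpha(p1="foo", p2="bar"):
    print('{0},{1}'.format(p1, p2))

alpha("FOO", None)   # FOO,bar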
Not a direct answer, but I think this is worth considering:
See if you can break your function into several functions, none of which has any default arguments. Factor any shared functionality out into a function you designate as internal.
def alpha():
    _omega('foo', 'bar')

def beta(p1):
    _omega(p1, 'bar')

def _omega(p1, p2):
    print('{0},{1}'.format(p1, p2))
This works well when the extra arguments trigger "extra" functionality, as it may allow you to give the functions more descriptive names.
Functions with boolean arguments with True and/or False defaults frequently benefit from this type of approach.
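For instance, a made-up sketch of the boolean-argument case (the save/_write names are invented for illustration):
# Instead of one function with a boolean default ...
def save(data, overwrite=False):
    print('writing {0} (overwrite={1})'.format(data, overwrite))

# ... you can expose two descriptively named functions with no defaults,
# sharing an internal helper:
def save_new(data):
    _write(data, False)

def overwrite_existing(data):
    _write(data, True)

def _write(data, overwrite):
    print('writing {0} (overwrite={1})'.format(data, overwrite))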
Another possibility is to simply do alpha("FOO", myp2 or "bar"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value.
Just create a constant:
P2_DEFAULT = "bar"
def alpha(p1="foo", p2=P2_DEFAULT):
    print('{0},{1}'.format(p1, p2))
and call the function:
alpha("FOO", myp2 or P2_DEFAULT)
If the default values for alpha change, we only have to change one constant.
Be careful with logical or for some cases, see https://stackoverflow.com/a/4978745/3605259
One more (better) use case
For example, we have some config (dictionary). But some values are not present:
config = {'name': 'Johnny', 'age': '33'}
work_type = config.get('work_type', P2_DEFAULT)
alpha("FOO", work_type)
So we use the dict method get(key, default_value), which returns default_value if our config dict does not contain that key.
As I cannot comment on answers yet, I'd like to add that the first solution (unpacking the kwargs) would fit nicely in a decorator as follows:
from functools import wraps

def remove_none_from_kwargs(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        return func(self, *args, **{k: v for k, v in kwargs.items() if v is not None})
    return wrapper
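For completeness, a hypothetical usage sketch (the Client class and its alpha method are invented for illustration, and it assumes the remove_none_from_kwargs decorator above is in scope):
# Usage sketch: decorate a method so None-valued kwargs fall back to the defaults.
class Client:
    @remove_none_from_kwargs
    def alpha(self, p1="foo", p2="bar"):
        print('{0},{1}'.format(p1, p2))

Client().alpha(p1="FOO", p2=None)   # prints: FOO,bar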
I'm a C++ guy learning lambda functions in Python, and I want to understand them inside out. I did some searches before posting here. Anyway, this piece of code came up:
<1> I don't quite understand the purpose of the lambda function here. Are we trying to get something like a function template? If so, why don't we just set up two parameters in the function's input?
<2> Also, make_incrementor(42) at this moment is equivalent to return x + 42, and x is the 0 and 1 in f(0) and f(1)?
<3> Does f(0) not have the same effect as >>> f = make_incrementor(42)? For f(0), what are the values of x and n respectively?
Any comments are welcome! Thanks.
>>> def make_incrementor(n):
... return lambda x: x + n
...
>>> f = make_incrementor(42)
>>> f(0)
42
>>> f(1)
43
Yes, this is similar to a C++ int template. However, instead of being created at compile time (yes, Python, at least CPython, is "compiled"), the function is created at run time. Why the lambda is used in this specific case is unclear; it is probably only there to demonstrate that functions can be returned from other functions, rather than for practical use. Sometimes, however, statements like this may be necessary if you need a function taking a specified number of arguments (e.g. for map, the function must take the same number of arguments as the number of iterables given to map) but the behaviour of the function should depend on other arguments.
make_incrementor returns a function that adds n (here, 42) to any x passed to that function. In your case the x values you tried are 0 and 1.
f = make_incrementor(42) sets f to a function that returns x + 42. f(0), however, returns 0 + 42, which is 42 - the returned types and values are both different, so the different expressions don't have the same effect.
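To make the comparison with "just using two parameters" concrete, here is a small sketch:
# Two-parameter version: you must supply n on every call.
def add(x, n):
    return x + n

# Closure version: n is baked in once, and the result is a one-argument function
# you can hand to code that only knows how to call f(x).
def make_incrementor(n):
    return lambda x: x + n

add_42 = make_incrementor(42)
print(add(0, 42), add_42(0))          # 42 42
print(list(map(add_42, [0, 1, 2])))   # [42, 43, 44]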
The purpose is to show a toy example of returning a lambda. It lets you create a function with data baked in. I have used this less trivial example of a similar technique:
def startsWithFunc(testString):
    return lambda x: x.find(testString) == 0
Then when I am parsing, I create some functions:
startsDescription = startsWithFunc("!Sample_description")
startMatrix = startsWithFunc("!series_matrix_table_begin")
Then in code I use:
while line:
    # .... other stuff
    if startsDescription(line):
        pass  # do description work
    if startMatrix(line):
        pass  # do matrix start work
    # other stuff ... increment line ... etc
Still perhaps trivial, but it shows creating general functions with data baked in.
So I wrote this function from a book I am reading, and this is how it starts:
def cheese_and_crackers(cheese_count, boxes_of_crackers):
    print "You have %d cheeses!" % cheese_count
    print "You have %d boxes of crackers!" % boxes_of_crackers
    print "Man that's enough for a party!"
    print "Get a blanket.\n"
OK, makes sense. And then, this is where the function is run and where I got a little confused and wanted to confirm something:
print "OR, we can use variables from our script:"
amount_of_cheese = 10
amount_of_crackers = 50
cheese_and_crackers(amount_of_cheese, amount_of_crackers)
The thing that confused me here is that amount_of_cheese and amount_of_crackers are taking the place of the variables (verbiage? not sure if I'm using the right lingo) cheese_count and boxes_of_crackers respectively, i.e. the initial variable names in the function.
So my question is: when you are using different variable names from the ones used in the function you initially wrote, how can the names change AFTER you wrote out the function with its own variable names? How would the program know what the new variables are if they only appear after it?
I thought Python reads programs top to bottom, or does it do it bottom to top?
Does that make sense? I'm not sure how to explain it. Thank you for any help. :)
(python 2.7)
I think you are just a bit confused on the naming rules for parameter passing.
Consider:
def foo(a, b):
    print a
    print b
and you can call foo as follows:
x = 1
y = 2
foo(x, y)
and you'll see:
1
2
The variable names of the arguments (a, b) in the function signature (1st line of function definition) do not have to agree with the actual variable names used when you invoke the function.
Think of it as this, when you call:
foo(x, y)
It's saying: "invoke the function foo; pass x in as a, pass y in as b". Furthermore, since these arguments are immutable integers, reassigning them inside the function won't change the values outside of the function, from where it was invoked. Consider the following:
def bar(a, b):
    a = a + 1
    b = b + 2
    print a
    print b
x = 0
y = 0
bar(x, y)
print x
print y
and you'll see:
1
2
0
0
The script runs from top to bottom. The function executes when you call it, not when you define it.
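A tiny demonstration of that ordering:
def greet():
    print("inside greet")    # nothing is printed when the def statement runs

print("before the call")
greet()                      # only now does the body execute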
I'd suggest trying to understand concepts like variables and function argument passing first.
def change(variable):
    print variable
var1 = 1
change(var1)
In the above example, var1 is a variable in the main thread of execution.
When you call a function like change(), the scope changes. The caller's variable names are not what you work with inside the function; but if you pass a value in as an argument, such as var1, you can use that value inside your function under the name you gave it in the function declaration: in this case, variable. But it is entirely separate from var1! The value is the same, but it is a different variable!
Your question relates to how function parameters are passed.
There are two behaviours you can observe when a parameter is passed into a function:
By value ------- a value changed in the function's domain is not changed in the global domain
By reference ------- a value changed in the function's domain is also changed in the global domain
In Python, non-atomic (mutable) types such as lists and dicts effectively behave as if passed by reference; atomic (immutable) types like strings and integers effectively behave as if passed by value.
For example,
Case 1:
x = 20

def foo(x):
    x += 10

foo(x)
print x  # prints 20, rather than 30
Case 2:
d = {}
def foo(x): x['key']=20
foo(d)
print d  # prints {'key': 20}
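A short sketch that makes the distinction explicit (mutating the passed-in object versus rebinding the parameter name):
def mutate(lst):
    lst.append(99)        # mutates the object the caller also sees

def rebind(lst):
    lst = [99]            # rebinds the local name only; the caller is unaffected

data = [1, 2]
mutate(data)
print(data)               # [1, 2, 99]
rebind(data)
print(data)               # still [1, 2, 99]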