I get some strange behaviour from collections.defaultdict:
import collections
c1 = collections.defaultdict(str)
c1['new'] # Works!
c2 = collections.defaultdict(default_factory=str)
c2['new'] # Raises KeyError...
Why does c2 raise a KeyError?
Sometimes I like naming parameters because I think it increases readability.
At first I thought maybe Python does not allow me to pass the parameter by name and instead puts my default_factory argument into kwargs, so I checked:
def func(first, **kwargs):
    print(first)
    print(kwargs)
func(first='one', second='two')
This outputs:
one
{'second': 'two'}
So this is not the case.
The default_factory parameter of the defaultdict constructor is positional-only and doesn't really have a name. If you try to pass it by name, you are just passing a completely unrelated keyword argument. Since keyword arguments to the defaultdict constructor are interpreted as its initial contents, your dict starts out having a single key "default_factory" whose value is the str type object.
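A quick interpreter check illustrates this (a minimal sketch; the factory has to be passed positionally):
import collections
c2 = collections.defaultdict(default_factory=str)
print(dict(c2))            # {'default_factory': <class 'str'>} -- treated as initial contents
print(c2.default_factory)  # None, so missing keys raise KeyError
c3 = collections.defaultdict(str)  # factory passed positionally, as intended
print(c3['new'] == '')     # True -- the str factory supplied a default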
To understand how this works, imagine a function like this:
def func(*args, **kwds):
    (default_factory,) = args
    for k, v in kwds.items():
        print(k, v)  # do something with keys and values
If the documentation of this function were to name the positional argument default_factory, that might be a correct description of its meaning, but it would be misleading if it implied that one could pass it as a keyword argument.
Some built-in functions are like that because it is very easy to define positional-only arguments in CPython C code. With defaultdict, it is by design: it allows literally any string key to be used as part of the initial contents, without making an exception for a key that happens to be named default_factory.
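Since Python 3.8 you can declare the same kind of positional-only parameter in pure Python with the / marker; a small illustrative sketch (not how defaultdict itself is implemented):
def append_items(store, /, **items):
    # "store" is positional-only, so a keyword argument named "store"
    # simply lands in **items instead of clashing with the parameter
    store.update(items)
    return store

print(append_items({}, store=1, x=2))  # {'store': 1, 'x': 2}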
Related
For this function
def eat_dog(name, should_digest=True):
    print "ate dog named %s. Digested, too? %s" % (name, str(should_digest))
I want to, external to the function, read its arguments and any default values attached. So for this specific example, I want to know that name has no default value (i.e. that it is a required argument) and that True is the default value for should_digest.
I'm aware of inspect.getargspec(), which does give me information about arguments and default values, but I see no connection between the two:
ArgSpec(args=['name', 'should_digest'], varargs=None, keywords=None, defaults=(True,))
From this output how can I tell that True (in the defaults tuple) is the default value for should_digest?
Additionally, I'm aware of the "ask for forgiveness" model of approaching a problem, but unfortunately output from that error won't tell me the name of the missing argument:
>>> eat_dog()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: eat_dog() takes at least 1 argument (0 given)
To give context (why I want to do this), I'm exposing functions in a module over a JSON API. If the caller omits certain function arguments, I want to return a specific error that names the specific function argument that was omitted. If a client omits an argument, but there's a default provided in the function signature, I want to use that default.
Python 3.x
In a Python 3.x world, you should probably use a Signature object:
import inspect
def get_default_args(func):
    signature = inspect.signature(func)
    return {
        k: v.default
        for k, v in signature.parameters.items()
        if v.default is not inspect.Parameter.empty
    }
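Applied to (a Python 3 version of) the eat_dog function from the question, this gives, for example:
def eat_dog(name, should_digest=True):
    pass

print(get_default_args(eat_dog))  # {'should_digest': True}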
Python 2.x (old answer)
The args/defaults can be combined as:
import inspect
a = inspect.getargspec(eat_dog)
zip(a.args[-len(a.defaults):],a.defaults)
Here a.args[-len(a.defaults):] are the arguments with default values and, obviously, a.defaults are the corresponding default values.
You could even pass the output of zip to the dict constructor and create a mapping suitable for keyword unpacking.
Looking at the docs, this solution only works on Python 2.6 or newer, since it assumes that inspect.getargspec returns a named tuple. Earlier versions returned a regular tuple, but it would be very easy to modify accordingly. Here's a version which works with older (and newer) versions:
import inspect
def get_default_args(func):
    """
    returns a dictionary of arg_name:default_values for the input function
    """
    args, varargs, keywords, defaults = inspect.getargspec(func)
    return dict(zip(args[-len(defaults):], defaults))
Come to think of it:
return dict(zip(reversed(args), reversed(defaults)))
would also work and may be more intuitive to some people.
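That works because zip stops at the shorter of its inputs, so the leading arguments without defaults simply drop out; a quick sketch with the values from the question:
args = ['name', 'should_digest']
defaults = (True,)
print(dict(zip(reversed(args), reversed(defaults))))  # {'should_digest': True}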
Depending on exactly what you need, you might not need the inspect module since you can check the __defaults__ attribute of the function:
>>> eat_dog.__defaults__
(True,)
>>> eat_dog.__code__.co_argcount
2
>>> eat_dog.__code__.co_varnames
('name', 'should_digest')
>>>
>>> eat_dog.__kwdefaults__
>>> eat_dog.__code__.co_kwonlyargcount
0
You can use inspect module with its getargspec function:
inspect.getargspec(func)
Get the names and default values of a Python function’s arguments. A tuple of four things is returned: (args, varargs, keywords, defaults). args is a list of the argument names (it may contain nested lists). varargs and keywords are the names of the * and ** arguments or None. defaults is a tuple of default argument values or None if there are no default arguments; if this tuple has n elements, they correspond to the last n elements listed in args.
See mgilson's answer for exact code on how to retrieve argument names and their default values.
For those looking to grab the default value of a specific parameter, building on mgilson's answer:
value = signature(my_func).parameters['param_name'].default
Here's a full working version, done in Python 3.8.2
from inspect import signature
def my_func(a, b, c, param_name='apple'):
    pass
value = signature(my_func).parameters['param_name'].default
print(value == 'apple') # True
To take care of keyword-only args (and because defaults and kwonlydefaults can be None):
spec = inspect.getfullargspec(func)
defaults = dict(zip(spec.args[::-1], (spec.defaults or ())[::-1]))
defaults.update(spec.kwonlydefaults or {})
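A hypothetical example with both regular and keyword-only defaults:
import inspect

def cook(meal, sauce='pesto', *, garnish=None):
    pass

spec = inspect.getfullargspec(cook)
defaults = dict(zip(spec.args[::-1], (spec.defaults or ())[::-1]))
defaults.update(spec.kwonlydefaults or {})
print(defaults)  # {'sauce': 'pesto', 'garnish': None}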
You can get this via some of the __dunder__ vars as mentioned by other posts. Putting that into a simple helper function can get you a dictionary of default values.
.__code__.co_varnames: A tuple of all input variables
.__defaults__: A tuple of the default values
It is worth noting that this tuple only includes the arguments that have defaults, and those must always come last among the function's arguments.
You can use these two items to match the last n names in .__code__.co_varnames with all the items in .__defaults__
EDIT: Thanks to @griloHBG - added an if statement to prevent exceptions when no defaults are specified.
def my_fn(a, b=2, c='a'):
    pass

def get_defaults(fn):
    if fn.__defaults__ is None:
        return {}
    return dict(zip(
        fn.__code__.co_varnames[-len(fn.__defaults__):],
        fn.__defaults__
    ))
print(get_defaults(my_fn))
Should give:
{'b': 2, 'c': 'a'}
In Python, all the arguments with default values come after the arguments without default values. So the mapping should start from the end and continue until you exhaust the default value list. Hence the logic:
dict(zip(reversed(args), reversed(defaults)))
gives the correctly mapped defaults.
I have already found various answers to this question (e.g. lambda function accessing outside variable) and all point to the same hack, namely (e.g.) lambda n=i: n*2 with i a variable in the external scope of the lambda (hoping I'm not misusing the term scope). However, this is not working, and given that all the answers I found are generally from a couple of years ago, I thought that maybe this has been deprecated and only worked with older versions of Python. Does anybody have an idea or suggestion on how to solve this?
SORRY, forgot the MWE
from inspect import getargspec

params = ['a', 'b']

def test(*args):
    return args[0]*args[1]

func = lambda p=params: test(p)
I expected the signature of func to be ['a','b'] but if I try
func(3,2)
I get a TypeError (TypeError: <lambda>() takes at most 1 argument (2 given))
and its true signature (from getargspec(func)[0]) is ['p']
In my real code the thing is more complicated. Shortly:
def fit(self, **kwargs):
    settings = self.synch()
    freepars = self.loglike.get_args()
    func = lambda p=freepars: self.loglike(p)
    minuit = Minuit(func, **settings)
I need the lambda because it's the only way I could think of to create, in place, a function object depending on a non-hardcoded list of variables (extracted via a method get_params() of the instance self.loglike). So func has to have the correct signature, to match the info inside the dict settings.
The inspector gives ['p'] as the argument of func, not the list of parameters which should go into loglike. Hope you can easily spot my mistake. Thank you.
There's no way to do exactly what you want. The syntax you're trying to use to set the signature of the function you're creating doesn't do what you want. It instead sets a default value for the argument you've defined. Python's function syntax allows you to define a function that accepts an arbitrary number of arguments, but it doesn't let you define a function with argument names in a variable.
What you can do is accept *args (or **kwargs) and then do some processing on the value to match it up with a list of argument names. Here's an example where I turn positional arguments in a specific order into keyword arguments to be passed on to another function:
arg_names = ['a', 'b']
def foo(*args):
    if len(args) != len(arg_names):
        raise ValueError("wrong number of arguments passed to foo")
    args_by_name = dict(zip(arg_names, args))
    some_other_function(**args_by_name)
This example isn't terribly useful, but you could do more sophisticated processing on the args_by_name dict (e.g. combining it with another dict), which might be relevant to your actual use case.
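For instance, one possible sketch (with hypothetical names) merges the positional values with a dict of defaults before dispatching:
arg_names = ['a', 'b']
defaults = {'b': 10, 'c': 'spam'}   # hypothetical defaults to merge in

def foo(*args):
    args_by_name = dict(defaults)              # start from the defaults
    args_by_name.update(zip(arg_names, args))  # overlay the positional values
    print(args_by_name)

foo(1, 2)  # {'b': 2, 'c': 'spam', 'a': 1}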
def accept(**kwargs):
pass
If I define accept like this and expect it to be called by passing a parameter which is a dict, are the asterisks necessary for all dict parameters?
What if I do things like:
def accept(dict):
    pass

dict = {...}
accept(dict)
Specifically speaking, I would like to implement an update method for a class which keeps a dict as a container. Just like the dict.update method, it takes a dict as a parameter and modifies the content of the container. In this specific case, should I use kwargs or not?
** in a function parameter collects all keyword arguments into a dictionary.
>>> def accept(**kwargs): # isinstance(kwargs, dict) == True
... pass
...
Call using keyword arguments:
>>> accept(a=1, b=2)
Call using ** operator:
>>> d = {'a': 1, 'b': 2}
>>> accept(**d)
>>> accept(d)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: accept() takes exactly 0 arguments (1 given)
See Python tutorial - Keyword argument and Unpacking Argument Lists.
BTW, don't use dict as a variable name. It shadows the builtin type dict.
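For the update method described in the question (a class keeping a dict as a container), a plain dict parameter, mirroring dict.update, is probably the more natural choice; a minimal sketch with a hypothetical Container class:
class Container(object):
    def __init__(self):
        self._data = {}

    def update(self, other):
        # accept a plain mapping, just like dict.update does
        self._data.update(other)

c = Container()
c.update({'a': 1, 'b': 2})
print(c._data)  # {'a': 1, 'b': 2}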
See f below. The function f has two parameters, a positional one called name and a keyword argument message. They are local variables in the frame of the function call.
When you do f("John", **{"foo": "123", "message": "Hello World"}), the call unpacks the dictionary into keyword arguments by its key/value pairs. With the **kwargs version of f shown further below, you end up with name="John", message="Hello World", and foo="123" collected in kwargs.
The purpose of **kwargs (double asterisks) is to accept unknown keyword arguments.
Contrast this:
def f(name, message=None):
    if message:
        return name + message
    return name
Here I am telling the user: if you ever want to call f, you can pass a keyword argument message; that is the only kwarg I will ever accept. If you try f("John", foo="Hello world") you will get a TypeError about an unexpected keyword argument.
**kwargs is useful if you don't know ahead of time what keyword arguments you will receive (very common for dispatching to lower-level functions/methods).
def f(name, message=None, **kwargs):
    if message:
        return name + message
    return name
In the second example, you can do f("John", **{"foo": "Hello Foo"}) while omitting message. You can also do f("John", **{"foo": "Hello Foo", "message": "Hello Message"}).
Can I ignore it?
As you can see, yes, you can ignore it. With f("John", **{"foo": "Hello Foo", "message": "Hello Message"}) I still only use message and ignore everything else.
Don't use **kwargs unless you need to be careless about the inputs.
What if my input is a dictionary?
If your function simply takes the dictionary and modifies it, NOT using individual keys, then just pass the dictionary. There is no reason to turn dictionary items into variables.
But here are two main usages of **kwargs.
Supposed I have a class and I want to create attributes on the fly. I can use setattr to set class attributes from input.
class Foo(object):
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)
If I do foo = Foo(**{"a": 1, "b": 2}) I will get foo.a and foo.b at the end.
This is particularly useful when you have to deal with legacy code. However, there is a big security concern. Imagine you own a MongoDB instance and this class is a container for writing into a database, and imagine this dict is a request form object from a user. The user can shovel ANYTHING in and you simply save it in the database like that? That's a security hole. Make sure you validate (use a loop).
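A minimal sketch of such validation, using a hypothetical allowlist of accepted field names:
ALLOWED_FIELDS = {'a', 'b'}  # hypothetical allowlist

class Foo(object):
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if key not in ALLOWED_FIELDS:
                raise ValueError("unexpected field: %s" % key)
            setattr(self, key, value)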
The second common usage of kwargs is when you don't know things ahead of time, which I have covered above (it's actually sort of the first common usage anyway).
If you want to pass a dictionary as input to a function, you can simply do it like this
def my_function1(input_dict):
    print input_dict

d = {"Month": 1, "Year": 2}
my_function1(d)  # {'Month': 1, 'Year': 2}
This is straightforward. Let's see the **kwargs method. kwargs stands for keyword arguments, so you need to actually pass the parameters as key-value pairs, like this:
def my_function2(**kwargs):
    print kwargs
my_function2(Month = 1, Year = 2)
But if you have a dictionary and if you want to pass that as a parameter to my_function2, it can be done with unpacking, like this
my_function2(**d)
I'm trying to figure out how to pass optional arguments from optparse. The problem I'm having is that if an optparse option is not specified, it defaults to None, but if I pass None into a function, it yells at me instead of using the function's default (which is understandable and valid).
conn = psycopg2.connect(database=options.db, hostname=options.hostname, port=options.port)
The question is, how do I use the function's defaults for optional arguments but still pass in user input when there is input, without a huge number of if statements?
Define a function remove_none_values that filters None-valued arguments out of a dictionary.
def remove_none_values(d):
    return dict((k, v) for (k, v) in d.iteritems() if v is not None)

kwargs = {
    'database': options.db,
    'hostname': options.hostname,
    ...
}
conn = psycopg2.connect(**remove_none_values(kwargs))
Or, define a function wrapper that removes none values before passing the data on to the original function.
import functools

def ignore_none_valued_kwargs(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        newkwargs = dict((k, v) for (k, v) in kwargs.iteritems() if v is not None)
        return f(*args, **newkwargs)
    return wrapper

my_connect = ignore_none_valued_kwargs(psycopg2.connect)
conn = my_connect(database=options.db, hostname=options.hostname, port=options.port)
The opo module of my thebops package (pip install thebops, https://bitbucket.org/therp/thebops) contains an add_optval_option function.
This uses an additional keyword argument empty which specifies the value to use if the option is used without a value. If one of the option strings is found in the commandline, this value is injected into the argument list.
This is still hackish, but at least it is made a simple-to-use function ...
It works well under the following circumstances:
The argument vector already exists when the option is created. This is usually true.
All programs I found which sport arguments with optional values require the given value to be attached as --option=value or -ovalue rather than --option value or -o value.
Maybe I'll tweak thebops.optparse to support the empty argument as well; but I'd like to have a test suite first to prevent regressions, preferably the original Optik / optparse tests.
This is the code:
from sys import argv

def add_optval_option(pog, *args, **kwargs):
    """
    Add an option which can be specified without a value;
    in this case, the value (if given) must be contained
    in the same argument as seen by the shell,
    i.e.:

    --option=VALUE, --option will work;
    --option VALUE will *not* work

    Arguments:
      pog -- parser or group
      empty -- the value to use when used without a value

    Note:
      If you specify a short option string as well, the syntax given by the
      help will be wrong; -oVALUE will be supported, -o VALUE will not!
      Thus it might be wise to create a separate option for the short
      option strings (in a "hidden" group which isn't added to the parser after
      being populated) and just mention it in the help string.
    """
    if 'empty' in kwargs:
        empty_val = kwargs.pop('empty')
        # in this case it's a good idea to have a <default> value; this can be
        # given by another option with the same <dest>, though
        for i in range(1, len(argv)):
            a = argv[i]
            if a == '--':
                break
            if a in args:
                argv.insert(i + 1, empty_val)
                break
    pog.add_option(*args, **kwargs)
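A hypothetical usage sketch, assuming an ordinary optparse parser and the function above:
from optparse import OptionParser

parser = OptionParser()
add_optval_option(parser,
                  '--verbose',
                  dest='verbosity',
                  empty='1',      # value injected when --verbose is given without a value
                  default='0',
                  help='set the verbosity level')
options, args = parser.parse_args()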
Does python have the ability to create dynamic keywords?
For example:
qset.filter(min_price__usd__range=(min_price, max_price))
I want to be able to change the usd part based on a selected currency.
Yes, it does. Use **kwargs in a function definition.
Example:
def f(**kwargs):
    print kwargs.keys()

f(a=2, b="b")      # -> ['a', 'b']
f(**{'d'+'e': 1})  # -> ['de']
But why do you need that?
If I understand what you're asking correctly,
qset.filter(**{
    'min_price__' + selected_currency + '__range':
        (min_price, max_price)})
does what you need.
You can easily do this by declaring your function like this:
def filter(**kwargs):
Your function will now be passed a dictionary called kwargs that contains the keywords and values passed to your function. Note that, syntactically, the word kwargs is meaningless; the ** is what causes the dynamic keyword behavior.
You can also do the reverse. If you are calling a function, and you have a dictionary that corresponds to the arguments, you can do
someFunction(**theDictionary)
There is also the lesser-used *foo variant, which causes you to receive a tuple of positional arguments. This is similar to C varargs.
Yes, sort of.
In your filter method you can declare a wildcard variable that collects all the unknown keyword arguments. Your method might look like this:
def filter(self, **kwargs):
    for key, value in kwargs.items():
        if key.startswith('min_price__') and key.endswith('__range'):
            currency = key.replace('min_price__', '').replace('__range', '')
            rate = self.current_conversion_rates[currency]
            self.setCurrencyRange(value[0]*rate, value[1]*rate)