Best way to call default keyword arguments - python

I am hoping to find a good way to make passing arguments to any function easy. I am making a huge number of functions, each taking a ton of variables. Each function calls a lot of functions underneath it, which use some of the same parameters.
I have decided that I can create a structure containing default parameters for these functions; the user may then fix any parameter at a chosen value or allow it to vary as a fitting procedure is performed. Either way, a default value is given up front for each parameter.
The best behaviour I can think of is for any function f() to accept any number of the relevant parameters, so that f() returns the same as f(default_param_structure) (the defaults are assumed), while calling f(arg1=1, arg31='a') replaces just the relevant parameters.
Here is an example I am trying to work out:
import pandas as pd
import numpy as np
default_a = 1
default_a_min = 0
default_a_max = 2
default_b = 2
default_b_min = 1
default_b_max = 3
def default_param_struct():
    a = np.array([default_a, default_a_min, default_a_max])
    b = np.array([default_b, default_b_min, default_b_max])
    d = {'a': a, 'b': b}
    return pd.DataFrame(d, index=['val', 'min', 'max'])

def f(a=default_a, b=default_b, *args, **kwargs):
    return kwargs
default_param_df = default_param_struct()
print default_param_df
def_param_dict = default_param_df.loc['val'].to_dict()
print def_param_dict
# This should print {'a': 3, 'b': 2} (i.e. a is passed and default_b is given automatically)
print f({'a':3})
# This should print {'a': 1, 'b': 2} (as this is the default parameter structure calculated in the function)
print f(def_param_dict)
# This should print {'a': 1, 'b': 2} (default parameter structure is assumed)
print f()
# This should print {'a': 4, 'b': 2} (i.e. a is passed and default_b is given automatically)
print f(a=4)
# This should print {'a': 3, 'b': 5} as they are both passed
print f(a=3, b=5)
But the output is:
a b
val 1 2
min 0 1
max 2 3
{'a': 1, 'b': 2}
{}
{}
{}
{}
So none of the arguments are making it in. Does anyone know how to solve this? Is there a more elegant solution?

You write
I am making a huge number of functions, each taking a ton of variables.
There are technical solutions for this, but this is usually an indication of some design flaw.
E.g., if you have functions calling each other, passing the same huge number of arguments over and over, then perhaps they should be methods of a class, and most of the arguments should be members. This will both increase encapsulation and decrease the number of arguments passed (the arguments are implicit in self).
Otherwise, you might consider making your parameters themselves an OO hierarchy. Perhaps you might need some class describing parameters; perhaps it needs to be subclassed, etc.
IMHO, you shouldn't be solving this with technical tricks.
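As a minimal sketch of that idea (the class name Model and its methods are made up here, with the question's a and b as the shared parameters held on the instance):
class Model(object):
    """Holds the shared parameters once; the methods read them from self."""
    def __init__(self, a=1, b=2):
        self.a = a
        self.b = b

    def residual(self):
        # a helper that would otherwise need a and b passed in explicitly
        return self.a - self.b

    def fit(self):
        # calls other methods without re-passing the parameters
        return self.residual() ** 2

m = Model(a=4)   # override only a; b keeps its default of 2
print m.fit()    # (4 - 2) ** 2 == 4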

You are passing values for a and b, which are bound to the named parameters a=default_a, b=default_b, so nothing ends up in kwargs; unless you pass different names you are not going to see any output from kwargs:
print f(f=3, j=5)
{'j': 5, 'f': 3}
If you want default values, look a and b up in kwargs instead, falling back to a default when they are not passed in:
def f(*args, **kwargs):
    a, b = kwargs.get("a", "default_value"), kwargs.get("b", "default_value")
You should add a docstring explaining the usage.
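As a sketch of how this can serve the original goal, assuming a module-level dict of defaults (the names below are illustrative only): f() falls back to the defaults, and any keyword arguments override them.
defaults = {'a': 1, 'b': 2}

def f(**kwargs):
    # start from the defaults, then apply whatever overrides the caller supplied
    params = dict(defaults, **kwargs)
    return params

print f()           # {'a': 1, 'b': 2}
print f(a=3)        # {'a': 3, 'b': 2}
print f(a=3, b=5)   # {'a': 3, 'b': 5}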

Related

passing dynamically all dictionary <key, value> as function arguments

I have a dictionary, for instance:
x = {'a': 1, 'b': 2, 'c': 3}
What I would like to do is send all of its keys and values to some function, for instance:
func1(a=1, b=2, c=3)
(For another dictionary y = {'z': 8, 'x': 9, 'w': 11, 'p': 88},
the function call would be:
func1(z=8, x=9, w=11, p=88).)
Is it possible?
Thank you.
This is a built-in feature of Python; consider the following:
x = {'a': 1,'b': 2, 'c': 3}
func1(**x)
is the same as:
func1(a=1, b=2, c=3)
I recommend you read the documentation on defining functions.
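To make this concrete, here is a small runnable sketch; the body of func1 is made up purely for illustration:
def func1(**kwargs):
    # print whatever keyword arguments were received
    for name, value in sorted(kwargs.items()):
        print name, '=', value

x = {'a': 1, 'b': 2, 'c': 3}
func1(**x)   # equivalent to func1(a=1, b=2, c=3)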
These examples might also be useful; however, it really depends on what the final function is:
How to pass dictionary items as function arguments in python?
https://www.geeksforgeeks.org/python-passing-dictionary-as-arguments-to-function/
How to pass dictionary as an argument of function and how to access them in the function

How can I give an argument without knowing its name to a multiparameter function? Python

Hi, I have a multiparameter function where only one parameter is missing from a kwargs dict, and I want to be able to supply the missing parameter without knowing which parameter it is:
def function(a, b, c):   # multiparameter function
    print('a=', a)
    print('b=', b)
    print('c=', c)

kwargs = {'a': 1, 'b': 2}   # only the value for c is missing
If I run function(3, **kwargs), it interprets that a=3, but I want it to interpret that c=3.
Alternatively, I could find out the name of the missing variable, but I cannot manage to feed it correctly to the function. I tried:
variable='c'
function(variable=3,**kwargs)
But it raises the error function() got an unexpected keyword argument 'variable'.
If you can't modify the definition of function, you can wrap it with functools.partial.
from functools import partial

def function(a, b, c):   # multiparameter function
    print('a=', a)
    print('b=', b)
    print('c=', c)

# bind the defaults as keywords so that later keyword arguments can override them
function_caller = partial(function, a=1, b=2, c=3)
Now, any keyword arguments to function_caller will override the values specified in the call to partial:
function_caller() # function(1,2,3)
function_caller(a=5, b=4) # function(5, 4, 3)
kwargs = {'a': 5, 'b': 4}
function_caller(**kwargs) # Same as previous call
If you have a variable that contains the name of a parameter, you can easily add it to kwargs, though you can't use it directly as a keyword argument:
variable = 'c'
kwargs = {'a': 5, 'b': 4, variable: 3} # {'a': 5, 'b': 4, 'c': 3}
function_caller, though, does not accept any positional arguments.
First, you should probably read about *args and **kwargs.
You can force your parameters to be keyword-only with *, and each keyword parameter can have a default sentinel value to help you spot which one is missing:
def function(*, a=None, b=None, c=None):
    print('a=', a)
    print('b=', b)
    print('c=', c)
If you want to input something on the fly you could do:
def function(input, *, a=None, b=None, c=None):
    print('a=', a or input)
    print('b=', b or input)
    print('c=', c or input)
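A quick usage sketch of that second variant, reusing the kwargs dict from the question; the positional value fills whichever parameter is still at its sentinel:
kwargs = {'a': 1, 'b': 2}   # only c is missing
function(3, **kwargs)
# a= 1
# b= 2
# c= 3
Note that the or-based fallback also triggers for falsy values such as 0 or an empty string, so a dedicated sentinel object would be safer in general.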
If you have the parameter name in a variable, then this request from the question:
kwargs={'a':1,'b':2} #only the value for c is missing
variable='c'
function(variable=3,**kwargs)
can work if you just add that key to the kwargs dictionary instead:
kwargs[variable] = 3
function(**kwargs)
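Put end to end with the function from the question, the whole thing looks like this (a small self-contained sketch):
def function(a, b, c):
    print('a=', a)
    print('b=', b)
    print('c=', c)

kwargs = {'a': 1, 'b': 2}
variable = 'c'          # the name of the missing parameter
kwargs[variable] = 3
function(**kwargs)      # prints a= 1, b= 2, c= 3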

how to make `dict(a_instance_of_a_subclass_of_dict)` become a regular dict in Python 2.7?

I have a subclass of dict, but when I am writing it into a database, I want to convert it back to a regular dict first. I wanted to use dict(a_instance_of_a_subclass_of_dict), which looks like casting, but I need to be able to decide that only certain keys are exported to the regular dict.
I don't know what special method of a mapping is called when you write dict(mapping) so I did this experiment:
class Mydict(dict):
    def __getattribute__(self, what):
        print 'getting attribute:', what

m = Mydict(x=2, y=3, z=4)
print '--------- mark ---------'
print dict(m)
It prints:
--------- mark ---------
getting attribute: keys
{'y': 3, 'x': 2, 'z': 4}
It looks like dict(mapping) will call the keys method of mapping. (Actually something weird happens here: __getattribute__ returns None, but dict simply doesn't rely on the return value and still gets the correct contents. Let's forget about that for now.)
Then I rewrote another subclass of dict like this:
class Mydict2(dict):
    def keys(self):
        print 'here keys'
        return ['x', 'y']

m2 = Mydict2(x=2, y=3, z=4)
print '--------- mark2 ---------'
print dict(m2)
Output is this:
--------- mark2 ---------
{'y': 3, 'x': 2, 'z': 4}
It didn't call the keys method. Weird again!
Can somebody explain this behavior? Thanks in advance!
CPython checks for the presence of keys to decide whether the argument is a dictionary-like object:
if (PyObject_HasAttrString(arg, "keys"))
    result = PyDict_Merge(self, arg, 1);
http://hg.python.org/cpython/file/2.7/Objects/dictobject.c#l1435
However, at a later stage, if it turns out that the argument is a dict (or a subclass of it), it doesn't call keys but rather accesses the internal hash table directly. keys is only called for dict-like objects that are not real dicts; for example, this works as expected:
from UserDict import UserDict

class Mydict2(UserDict):
    def keys(self):
        print 'here keys'
        return ['x', 'y']
My advice is to avoid fiddling with the system stuff and to add an explicit method like:
class Mydict2(dict):
    def export(self):
        return {k: self[k] for k in ['x', 'y']}
and call it when you're about to serialize your object for writing into the db.
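For instance, a minimal usage sketch of that export method (db.write is just a stand-in for whatever the database layer expects):
m = Mydict2(x=2, y=3, z=4)
plain = m.export()   # a regular dict containing only the whitelisted keys
print plain
# db.write(plain)    # hypothetical serialization call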

pack multiple variables of different datatypes in list/array python

I have multiple variables that I need to pack as one and hold sequentially, as in an array or list. This needs to be done in Python, and I am still in my Python infancy.
E.g. in Python:
a = 'Tom'
b = 100
c = 3.14
d = {'x':1, 'y':2, 'z':3}
I want all of the above in one sequential data structure. For the sake of clarity, here is a similar implementation I would have written in C++:
struct node
{
    string a;
    int b;
    float c;
    map<char, int> d;   // just as an example for a dictionary in Python
};

vector<node> v;   // looking for something like this, which can be iterated over
If someone could give me a similar implementation for storing, iterating over, and modifying the contents, that would be really helpful. Any pointers in the right direction are also good with me.
Thanks
You can either use a dictionary like Michael proposes (but then you need to access the contents of v with v['a'], which is a little cumbersome), or you can use the equivalent of C++'s struct: a named tuple:
import collections

node = collections.namedtuple('node', 'a b c d')
# Tom = ...
v = node(Tom, 100, 3.14, {'x': 1, 'y': 2, 'z': 3})
print v      # node(a=…, b=100, c=3.14, d={'x': 1, 'y': 2, 'z': 3})
print v.c    # 3.14
print v[2]   # 3.14 (works too, but is less meaningful and robust than something like node.last_name)
This is similar to, but simpler than defining your own class: type(v) == node, etc. Note however, as volcano pointed out, that the values stored in a namedtuple cannot be changed (a namedtuple is immutable).
If you indeed need to modify the values inside your records, the best option is a class:
class node(object):
    def __init__(self, *arg_list):
        for (name, arg) in zip('a b c d'.split(), arg_list):
            setattr(self, name, arg)

v = node(1, 20, 300, "Eric")
print v.d     # "Eric"
v.d = "Ajay"  # Works
The last option, which I do not recommend, is indeed to use a list or a tuple, like ATOzTOA mentions: elements must be accessed in a not-so-legible way: node[3] is less meaningful than node.last_name; also, you cannot easily change the order of the fields, when using a list or a tuple (whereas the order is immaterial if you access a named tuple or custom class attributes).
Multiple node objects are customarily put in a list, the standard Python structure for such a purpose:
all_nodes = [node(…), node(…),…]
or
all_nodes = []
for … in …:
all_nodes.append(node(…))
or
all_nodes = [node(…) for … in …]
etc. The best method depends on how the various node objects are created, but in many cases a list is likely to be the best structure.
Note, however, that if you need to store something akin to a spreadsheet table and need speed and facilities for accessing its columns, you might be better off with NumPy's record arrays, or a package like Pandas.
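For instance, a minimal sketch with pandas (the column names simply mirror the fields above):
import pandas as pd

records = [
    {'a': 'Tom', 'b': 100, 'c': 3.14},
    {'a': 'Ann', 'b': 200, 'c': 2.72},
]
df = pd.DataFrame(records)
print df['b'].mean()   # fast column-wise access: prints 150.0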
You could put all the values in a dictionary, and have a list of these dictionaries.
{'a': a, 'b': b, 'c': c, 'd': d}
Otherwise, if this data is something that could be represented by a class, for example a 'Person', create a Person class and create an object of that class with your data:
http://docs.python.org/2/tutorial/classes.html
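A minimal sketch of that idea; the Person name and its fields are purely illustrative:
class Person(object):
    def __init__(self, name, age, height, scores):
        self.name = name
        self.age = age
        self.height = height
        self.scores = scores

people = [Person('Tom', 100, 3.14, {'x': 1, 'y': 2, 'z': 3})]
for p in people:
    print p.name, p.age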
Just use lists:
a = "Tom"
b = 100
c = 3.14
d = {'x':1, 'y':2, 'z':3}
data = [a, b, c, d]
print data
for item in data:
    print item
Output:
['Tom', 100, 3.14, {'y': 2, 'x': 1, 'z': 3}]
Tom
100
3.14
{'y': 2, 'x': 1, 'z': 3}

What is the difference between **kwargs and dict in Python 3.2?

It seems that many aspects of Python are just duplicates of functionality. Is there some difference beyond the redundancy I am seeing between **kwargs and dict within Python?
There is a difference between argument packing/unpacking (where many people use **kwargs) and passing a dict as one of the arguments:
Using argument unpacking:
# Prepare function
def test(**kwargs):
    return kwargs
# Invoke function
>>> test(a=10, b=20)
{'a':10,'b':20}
Passing a dict as an argument:
# Prepare function
def test(my_dict):
    return my_dict
# Invoke function
>>> test(dict(a=10, b=20))
{'a':10,'b':20}
The differences are mostly:
readability (you can simply pass keyword arguments even if they weren't explicitly defined),
flexibility (you can support some keyword arguments explicitly and the rest using **kwargs),
packing into **kwargs gives the function its own dict, which helps you avoid unexpected changes to the object "containing" the arguments (less important, since Python in general assumes developers know what they are doing, but see the sketch below).
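A small sketch of that last point, assuming a trivial function that mutates what it receives:
def add_flag_dict(d):
    d['flag'] = True       # mutates the caller's dict in place

def add_flag_kwargs(**kwargs):
    kwargs['flag'] = True  # mutates only the function's own local dict

options = {'a': 10}
add_flag_dict(options)
print options              # {'a': 10, 'flag': True}

options = {'a': 10}
add_flag_kwargs(**options)
print options              # {'a': 10} -- unchanged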
It is right that in most cases you can just interchange dicts and **kwargs.
For example:
my_dict = {'a': 5, 'b': 6}
def printer1(adict):
    return adict

def printer2(**kwargs):
    return kwargs
#evaluate:
>>> printer1(my_dict)
{'a': 5, 'b': 6}
>>> printer2(**my_dict)
{'a': 5, 'b': 6}
However, with **kwargs you have more flexibility when you combine it with other arguments:
def printer3(a, b=0, **kwargs):
    return a, b, kwargs
#evaluate
>>> printer3(**my_dict)
(5, 6, {})
