Is it possible to pass the same optional arguments to multiple functions? - python

I want to ask if there is a way to avoid unnecessary duplication of code when passing the same values into a function's optional arguments.
Hopefully the following example provides a good idea of what I am trying to do:
def f(arg1):
    def g(optional_1=0, optional_2=0, optional_3=0):
        return arg1 + optional_1 + optional_2 + optional_3
    return g
b, c = 2, 3
f1 = f(1)
f2 = f(2)
calc_f1 = f1(optional_2=b, optional_3=c)
calc_f2 = f2(optional_2=b, optional_3=c)
As you can see, f1 and f2 only differ in the arg1 passed into f and afterwards I call them with the same variables for the same optional arguments.
It is fine when the code is short, but when I have over 10 optional arguments, it becomes unnecessarily long and redundant.
Is it possible to do something like
optional_variable_pair = #some way to combine them
calc_f1 = f1(optional_variable_pair)
calc_f2 = f2(optional_variable_pair)
so I get a more succinct and easy to read code?

Any function with multiple optional arguments is a bit smelly because:
you get so many argument combinations that it requires a large amount of testing, and
because of all the options the function tends to need a lot of conditionals and code paths, which increases its cyclomatic complexity.
You can apply a refactoring that extracts the whole argument list into an object and have the function work on that object. This works really well if you can find a unifying name that describes your argument list and fits whatever metaphor you are using around the function. You can even invert the call so that the function becomes a method of the object, which gives you some encapsulation.
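As a rough sketch of that refactoring (the Options name and the dataclass choice are illustrative assumptions, not part of the question):
from dataclasses import dataclass

@dataclass
class Options:
    optional_1: int = 0
    optional_2: int = 0
    optional_3: int = 0

    # Inverted call: the calculation becomes a method of the parameter object.
    def apply(self, arg1):
        return arg1 + self.optional_1 + self.optional_2 + self.optional_3

opts = Options(optional_2=2, optional_3=3)
calc_f1 = opts.apply(1)  # 6
calc_f2 = opts.apply(2)  # 7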

To answer the question you asked, the answer is yes. You can do almost exactly what you want using keyword argument unpacking.
def f(arg1):
    def g(optional_1=0, optional_2=0, optional_3=0):
        return arg1 + optional_1 + optional_2 + optional_3
    return g
optional_variable_pair = {
    'optional_2': 2,
    'optional_3': 3,
}
f1 = f(1)
f2 = f(2)
calc_f1 = f1(**optional_variable_pair)
calc_f2 = f2(**optional_variable_pair)
If I'm reading your intent correctly, though, the essence of your question is wanting to pass new first arguments with the same successive arguments to a function. Depending on your use case, the wrapper function g may be unnecessary.
def f(arg1, *, optional_1=0, optional_2=0, optional_3=0):
    return arg1 + optional_1 + optional_2 + optional_3
optional_variable_pair = {
    'optional_2': 2,
    'optional_3': 3,
}
calc_f1 = f(1, **optional_variable_pair)
calc_f2 = f(2, **optional_variable_pair)
Obviously, if the first argument just keeps incrementing by one, a for loop is in order. And if you never use the optional_1 parameter, you do not need to include it. More to the point, if you find yourself using numbered arguments, there is a good chance you should really be working with tuple unpacking instead of keyword unpacking:
def f(*args):
    return sum(args)

optional_variable_pair = (2, 3)
for i in range(1, 3):
    calc = f(i, *optional_variable_pair)
    # ...do something with calc...
You may also be interested in researching functools.partial, which can take the place of your wrapper function g and allows this:
import functools

def f(*args):
    return sum(args)

f1 = functools.partial(f, 1)
f2 = functools.partial(f, 2)

calc_f1 = f1(2, 3)  # = 1 + 2 + 3 = 6
calc_f2 = f2(2, 3)  # = 2 + 2 + 3 = 7
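Note that functools.partial can bind keyword arguments as well, so with a keyword-only signature like the earlier f, a sketch along these lines would also work:
import functools

def f(arg1, *, optional_1=0, optional_2=0, optional_3=0):
    return arg1 + optional_1 + optional_2 + optional_3

# Bind the shared keyword arguments once; only the first argument varies.
with_shared = functools.partial(f, optional_2=2, optional_3=3)
calc_f1 = with_shared(1)  # 1 + 2 + 3 = 6
calc_f2 = with_shared(2)  # 2 + 2 + 3 = 7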

You can use key-value pairs as function arguments; for this purpose Python provides *args and **kwargs:
optional_variable_pair = {
    "optional_1": 1,
    "optional_2": 2,
    "optional_3": 3,
}
calc_f1 = f1(**optional_variable_pair)
calc_f2 = f2(**optional_variable_pair)

How to package a sequence of functions that act on a parameter in order in Python

Imagine there are three functions, all of which accept and return the same type of argument.
Normally, we can write this as fun3(fun2(fun1(args))); that is, a sequence of functions acts on the parameter in order, similar in spirit to a higher-order function such as "map".
In Mathematica, we can write this as fun3@fun2@fun1@args.
Now the question is: can we combine fun3@fun2@fun1 into another function fun without modifying their definitions, so that fun(args) can replace fun3(fun2(fun1(args)))? That looks more elegant and concise.
def merge_steps(*fun_list):
    def fun(arg):
        result = arg
        for f in fun_list:
            result = f(result)
        return result
    return fun

def plus_one(arg):
    return arg + 1

def double_it(arg):
    return arg ** 2

def power_ten(arg):
    return arg ** 10
combine1 = merge_steps(power_ten, plus_one, double_it)
combine2 = merge_steps(plus_one, power_ten, double_it)
combine1(3)
> 3486902500
Or use a lambda with functools.reduce:
from functools import reduce

steps = [power_ten, plus_one, double_it]
reduce(lambda a, f: f(a), steps, 3)
> 3486902500
I think you can use function recursion in Python to do this.
def function(args, times):
    print(f"{times} Times - {args}")
    if times > 0:
        function(args, times - 1)

function("test", 2)
Note: I just added the times argument so it does not recurse forever.
I'm not certain I understand your question, but are you talking about function composition along these lines?
# Some single-argument functions to experiment with.
def double(x):
    return 2 * x

def reciprocal(x):
    return 1 / x

# Returns a new function that will execute multiple single-argument functions in order.
def compose(*funcs):
    def g(x):
        for f in funcs:
            x = f(x)
        return x
    return g

# Demo.
double_recip_abs = compose(double, reciprocal, abs)
print(double_recip_abs(-2))  # 0.25
print(double_recip_abs(.1))  # 5.0

equivalent to R's `do.call` in python

Is there an equivalent to R's do.call in python?
do.call(what = 'sum', args = list(1:10)) #[1] 55
do.call(what = 'mean', args = list(1:10)) #[1] 5.5
?do.call
# Description
# do.call constructs and executes a function call from a name or a function and a list of arguments to be passed to it.
There is no built-in for this, but it is easy enough to construct an equivalent.
You can look up any object from the built-ins namespace using the __builtin__ (Python 2) or builtins (Python 3) modules then apply arbitrary arguments to that with *args and **kwargs syntax:
try:
    # Python 2
    import __builtin__ as builtins
except ImportError:
    # Python 3
    import builtins

def do_call(what, *args, **kwargs):
    return getattr(builtins, what)(*args, **kwargs)

do_call('sum', range(1, 11))
Generally speaking, we don't do this in Python. If you must translate strings into function objects, it is generally preferred to build a custom dictionary:
functions = {
    'sum': sum,
    'mean': lambda v: sum(v) / len(v),
}
then look up functions from that dictionary instead:
functions['sum'](range(1, 11))
This lets you strictly control what names are available to dynamic code, preventing a user from making a nuisance of themselves by calling built-ins for their destructive or disruptive effects.
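As a small illustration, such a whitelist can turn an unknown name into a clear error instead of reaching an arbitrary built-in (the error handling shown is just one possible choice):
functions = {
    'sum': sum,
    'mean': lambda v: sum(v) / len(v),
}

def do_call(name, *args, **kwargs):
    if name not in functions:
        raise ValueError('unknown function name: %r' % name)
    return functions[name](*args, **kwargs)

do_call('mean', range(1, 11))  # 5.5
# do_call('eval', '1 + 1') raises ValueError rather than reaching the built-in.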
do.call is pretty much the equivalent of the splat operator in Python:
def mysum(a, b, c):
    return sum([a, b, c])

# normal call:
mysum(1, 2, 3)

# with a list of arguments:
mysum(*[1, 2, 3])
Note that I’ve had to define my own sum function since Python’s sum already expects an iterable as its argument, so your original code would just be
sum(range(1, 11))
R has another peculiarity: do.call internally performs a function lookup of its first argument. This means that it finds the function even if it’s a character string rather than an actual function. The Python equivalent above doesn’t do this — see Martijn’s answer for a solution to this.
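If you do want that name-lookup behaviour without exposing all built-ins, one possible sketch is to accept either a callable or a string and resolve strings only against a namespace you supply explicitly (the parameter names here are illustrative):
def do_call(what, args=(), kwargs=None, namespace=None):
    # Resolve a string name against an explicitly supplied namespace only.
    if isinstance(what, str):
        what = (namespace or {})[what]
    return what(*args, **(kwargs or {}))

do_call(sum, args=[range(1, 11)])                            # 55
do_call('sum', args=[range(1, 11)], namespace={'sum': sum})  # 55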
This goes in the same direction as the previous answer, but why so complicated?
def do_call(what, args=[], kwargs={}):
    return what(*args, **kwargs)
(Which is more elegant than my previously posted definition:)
def do_call(which, args=None, kwargs=None):
    if args is None and kwargs is not None:
        return which(**kwargs)
    elif args is not None and kwargs is None:
        return which(*args)
    else:
        return which(*args, **kwargs)
Python's sum is different from R's sum (it expects a single iterable argument, whereas R's sum takes arbitrarily many arguments). So we define our own sum (mysum), which behaves similarly to R's sum. In a similar way we define mymean.
def mysum(*args):
    return sum(args)

def mymean(*args):
    return sum(args) / len(args)
Now we can recreate your example in Python - as a reasonable 1:1 translation of the R function call.
do_call(what = mymean, args=[1, 2, 3])
## 2.0
do_call(what = mysum, args=[1, 2, 3])
## 6
For functions with named arguments, we use a dict for kwargs, where the parameter names are the dictionary's keys (as strings) and the argument values are its values.
def myfunc(a, b, c):
    return a + b + c
do_call(what = myfunc, kwargs={"a": 1, "b": 2, "c": 3})
## 6
# we can even mix named and unnamed parts
do_call(what = myfunc, args = [1, 2], kwargs={"c": 3})
## 6

Calling function with unknown number of parameters

I am trying to create a set of functions in python that will all do a similar operation on a set of inputs. All of the functions have one input parameter fixed and half of them also need a second parameter. For the sake of simplicity, below is a toy example with only two functions.
Now, I want, in my script, to run the appropriate function, depending on what the user input as a number. Here, the user is the random function (so the minimum example works). What I want to do is something like this:
import random

def function_1(*args):
    return args[0]

def function_2(*args):
    return args[0] * args[1]

x = 10
y = 20
i = random.randint(1, 2)
f = function_1 if i == 1 else function_2
return_value = f(x, y)
And it works, but it seems messy to me. I would rather have function_1 defined as
def function_1(x):
    return x
Another way would be to define
def function_1(x, y):
    return x
But that leaves me with a dangling y parameter, and it will not work as cleanly. Is my way the "proper" way of solving my problem, or does a better way exist?
There are a couple of approaches here, all of them adding more boilerplate code.
There is also this PEP, which may be interesting to you.
But the 'pythonic' way of doing it is not as elegant as usual function overloading, due to the fact that functions are just class attributes.
So you can either go with a function like this:
def foo(*args):
and then count how many args you've got, which will be very broad but very flexible as well.
Another approach is default arguments:
def foo(first, second=None, third=None)
which is less flexible but easier to predict. Lastly, you can also use:
def foo(anything)
and detect the type of anything inside your function, acting accordingly (a short sketch of the first, arg-counting approach follows below).
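For instance, the arg-counting approach might look roughly like this, mirroring the two functions from the question (the error message is just illustrative):
def foo(*args):
    # Dispatch on how many positional arguments were supplied.
    if len(args) == 1:
        return args[0]
    elif len(args) == 2:
        return args[0] * args[1]
    raise TypeError('foo() takes 1 or 2 arguments, got %d' % len(args))

foo(10)      # 10
foo(10, 20)  # 200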
Your monkey-patching example can work too, but it becomes more complex if you use it with class methods, and it makes introspection tricky.
EDIT: Also, for your case you may want to keep the functions separate and write a single 'dispatcher' function that will call the appropriate function for you depending on the arguments, which is probably the best solution considering the above.
EDIT2: Based on your comments, I believe the following approach may work for you:
def weigh_dispatcher(*args, **kwargs):
    # decide which function to call based on args
    if 'somethingspecial' in kwargs:
        return weight2(*args, **kwargs)
    return weight1(*args, **kwargs)

def weight_prep(arg):
    # common part here
    ...

def weight1(arg1, arg2):
    weight_prep(arg1)
    # rest of the func

def weight2(arg1, arg2, arg3):
    weight_prep(arg1)
    # rest of the func
Alternatively, you can move the common part into the dispatcher.
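That variant might look roughly like this, reusing the illustrative names from the sketch above:
def weigh_dispatcher(*args, **kwargs):
    # Do the common preparation once, then pick the specific function.
    weight_prep(args[0])
    if 'somethingspecial' in kwargs:
        return weight2(*args, **kwargs)
    return weight1(*args, **kwargs)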
You may also have a function with an optional second argument:
def function_1(x, y=None):
    if y is not None:
        return x + y
    else:
        return x
Here's the sample run:
>>> function_1(3)
3
>>> function_1(3, 4)
7
Or even optional multiple arguments! Check this out:
def function_2(x, *args):
    return x + sum(args)
And the sample run:
>>> function_2(3)
3
>>> function_2(3, 4)
7
>>> function_2(3, 4, 5, 6, 7)
25
Inside the function, you can treat args like a list:
def function_3(x, *args):
    if len(args) < 1:
        return x
    else:
        return x + sum(args)
And the sample run:
>>> function_3(1,2,3,4,5)
15

generic function in python - calling a method with unknown number of arguments

I'm new to Python and need some help...
I'm implementing a generic search function that accepts an argument "fringe", which can be a data structure of many types.
in the search method I have the line:
fringe.push(item, priority)
The problem is that the push method in different data structures takes a different number of arguments (some require priority and some don't). Is there an elegant way to get around that and make the push method take only the number of args it requires out of the argument list sent?
Thanks!
The way to accept a varying number of arguments while still being able to select the right ones is to use *args and **kwargs parameters.
From Mark Lutz's Learning Python book:
* and **, are designed to support functions that take any number of arguments. Both can appear in either the function definition or a
function call, and they have related purposes in the two locations.
* and ** in function definition
If you define a function:
def f1(param1, *argparams, **kwparams):
    print 'fixed_params -> ', param1
    print 'argparams --> ', argparams
    print 'kwparams ---->,', kwparams
you can call it this way:
f1('a', 'b', 'c', 'd', kw1='keyw1', kw2='keyw2')
Then you get:
fixed_params -> a
argparams --> ('b', 'c', 'd')
kwparams ---->, {'kw1': 'keyw1', 'kw2': 'keyw2'}
So that you can send/receive any number of parameters and keywords.
One typical idiom to recover keyword args is as follows:
def f1(param1, **kwparams):
    my_kw1 = kwparams['kw1']
    # ---- operate with my_kw1 ------
In this way your function can be called with any number of params, and it uses the ones it needs.
This type of argument is frequently used in GUI code such as wxPython class definitions and subclassing, as well as for function currying, decorators, etc.
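As a brief illustration of the decorator use just mentioned, a generic decorator typically forwards whatever it receives via *args and **kwargs (the log_calls name is only an example):
import functools

def log_calls(func):
    # Works for any signature because it simply forwards *args and **kwargs.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print('calling %s with %r %r' % (func.__name__, args, kwargs))
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b=0):
    return a + b

add(2, b=3)  # prints: calling add with (2,) {'b': 3}, then returns 5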
* and ** in function call
* and ** arguments in a function call are unpacked at the call site before being handed to the function:
def func(a, b, c, d):
    print(a, b, c, d)

args = (2, 3)
kwargs = {'d': 4}
func(1, *args, **kwargs)
### prints ---> 1 2 3 4
Great!
In theory you could use inspect.getargspec(fringe.push) to find out what arguments the method takes. That will tell you the number of arguments you could pass, but it's very messy:
import inspect

argspec = inspect.getargspec(fringe.push)
if len(argspec.args) >= 3 or argspec.varargs or argspec.keywords:
    fringe.push(item, priority)
else:
    fringe.push(item)
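Note that inspect.getargspec has since been removed (Python 3.11+); on modern Python the same kind of check can be written with inspect.signature, roughly like this:
import inspect

sig = inspect.signature(fringe.push)   # bound method, so self is excluded
params = list(sig.parameters.values())
takes_var_positional = any(p.kind == p.VAR_POSITIONAL for p in params)
positional = [p for p in params
              if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)]
if takes_var_positional or len(positional) >= 2:
    fringe.push(item, priority)
else:
    fringe.push(item)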
Much simpler just to go for it and ask forgiveness if necessary:
try:
    fringe.push(item, priority)
except TypeError:
    fringe.push(item)
Better still of course to make sure that the push() methods all have a consistent argument spec, but if you can't do that then use the try...except.
Try the code snippet below:
def push(item, priority=None):
    print(item, priority)

args = (1,)
push(*args)

args = (1, 2)
push(*args)
Can't you just use a default argument value?
>>> def foo(a, b=10):
...     print(a, b)
...
>>> foo(1000)
1000 10
>>> foo(1000, 1000)
1000 1000
>>>
If the argument b is not provided, it defaults to 10.

Ignore python multiple return value

Say I have a Python function that returns multiple values in a tuple:
def func():
    return 1, 2
Is there a nice way to ignore one of the results rather than just assigning to a temporary variable? Say if I was only interested in the first value, is there a better way than this:
x, temp = func()
You can use x = func()[0] to return the first value, x = func()[1] to return the second, and so on.
If you want to get multiple values at a time, use something like x, y = func()[2:4].
One common convention is to use a "_" as a variable name for the elements of the tuple you wish to ignore. For instance:
def f():
    return 1, 2, 3
_, _, x = f()
If you're using Python 3, you can use the star before a variable (on the left side of an assignment) to have it collect the remaining values into a list during unpacking.
# Example 1: a is 1 and b is [2, 3]
a, *b = [1, 2, 3]
# Example 2: a is 1, b is [2, 3], and c is 4
a, *b, c = [1, 2, 3, 4]
# Example 3: b is [1, 2] and c is 3
*b, c = [1, 2, 3]
# Example 4: a is 1 and b is []
a, *b = [1]
The common practice is to use the dummy variable _ (single underscore), as many have indicated here before.
However, to avoid collisions with other uses of that variable name (see this response), it might be a better practice to use __ (double underscore) instead as a throwaway variable, as pointed out by ncoghlan. E.g.:
x, __ = func()
Remember, when you return more than one item, you're really returning a tuple. So you can do things like this:
def func():
    return 1, 2

print(func()[0])  # prints 1
print(func()[1])  # prints 2
The best solution probably is to name things instead of returning meaningless tuples (unless there is some logic behind the order of the returned items). You can for example use a dictionary:
def func():
    return {'lat': 1, 'lng': 2}
latitude = func()['lat']
You could even use namedtuple if you want to add extra information about what you are returning (it's not just a dictionary, it's a pair of coordinates):
from collections import namedtuple
Coordinates = namedtuple('Coordinates', ['lat', 'lng'])
def func():
    return Coordinates(lat=1, lng=2)
latitude = func().lat
If the objects within your dictionary/tuple are strongly tied together then it may be a good idea to even define a class for it. That way you'll also be able to define more complex operations. A natural question that follows is: When should I be using classes in Python?
Recent versions of Python (≥ 3.7) have dataclasses, which you can use to define classes with very few lines of code:
from dataclasses import dataclass

@dataclass
class Coordinates:
    lat: float = 0
    lng: float = 0

def func():
    return Coordinates(lat=1, lng=2)

latitude = func().lat
The primary advantage of dataclasses over namedtuple is that they are easier to extend, but there are other differences. Note that by default dataclasses are mutable, but you can use @dataclass(frozen=True) instead of @dataclass to make them immutable.
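For instance, a frozen variant of the Coordinates dataclass above would look like this:
from dataclasses import dataclass

@dataclass(frozen=True)
class Coordinates:
    lat: float = 0
    lng: float = 0

point = Coordinates(lat=1, lng=2)
point.lat        # 1
# point.lat = 5  # would raise dataclasses.FrozenInstanceError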
Here is a video that might help you pick the right data class for your use case.
Three simple choices.
Obvious
x, _ = func()
x, junk = func()
Hideous
x = func()[0]
And there are ways to do this with a decorator.
def val0(aFunc):
    def pick0(*args, **kw):
        return aFunc(*args, **kw)[0]
    return pick0

func0 = val0(func)
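Assuming func() returns (1, 2) as in the question, the decorated version then yields only the first element:
func0()  # 1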
This seems like the best choice to me:
val1, val2, ignored1, ignored2 = some_function()
It's not cryptic or ugly (like the func()[index] method), and clearly states your purpose.
If this is a function that you use all the time but always discard the second argument, I would argue that it is less messy to create an alias for the function without the second return value using lambda.
def func():
    return 1, 2

func_ = lambda: func()[0]
func_()  # returns 1
This is not a direct answer to the question. Rather it answers this question: "How do I choose a specific function output from many possible options?".
If you are able to write the function (ie, it is not in a library you cannot modify), then add an input argument that indicates what you want out of the function. Make it a named argument with a default value so in the "common case" you don't even have to specify it.
def fancy_function(arg1, arg2, return_type=1):
    ret_val = None
    if return_type == 1:
        ret_val = arg1 + arg2
    elif return_type == 2:
        ret_val = [arg1, arg2, arg1 * arg2]
    else:
        ret_val = (arg1, arg2, arg1 + arg2, arg1 * arg2)
    return ret_val
This method gives the function "advanced warning" regarding the desired output. Consequently it can skip unneeded processing and only do the work necessary to get your desired output. Also because Python does dynamic typing, the return type can change. Notice how the example returns a scalar, a list or a tuple... whatever you like!
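A quick usage sketch showing the three shapes:
fancy_function(2, 3)                 # 5             (scalar)
fancy_function(2, 3, return_type=2)  # [2, 3, 6]     (list)
fancy_function(2, 3, return_type=3)  # (2, 3, 5, 6)  (tuple)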
When you have many outputs from a function and you don't want to call it multiple times, I think the clearest way of selecting the results would be:
results = fct()
a,b = [results[i] for i in list_of_index]
As a minimum working example, also demonstrating that the function is called only once :
def fct(a):
    b = a * 2
    c = a + 2
    d = a + b
    e = b * 2
    f = a * a
    print("fct called")
    return [a, b, c, d, e, f]
results=fct(3)
> fct called
x,y = [results[i] for i in [1,4]]
And the values are as expected :
results
> [3,6,5,9,12,9]
x
> 6
y
> 12
For convenience, Python's negative list indices can also be used:
x, y = [results[i] for i in [0, -2]]
This returns x = 3 and y = 12.
It is possible to ignore every variable except the first with less syntax if you like. If we take your example,
# The function you are calling.
def func():
    return 1, 2

# You seem to only be interested in the first output.
x, temp = func()
I have found that the following works:
x, *_ = func()
This approach "unpacks" with * all other variables into a "throwaway" variable _. This has the benefit of assigning the one variable you want and ignoring all variables behind it.
However, in many cases you may want an output that is not the first output of the function. In these cases, it is probably best to indicate this by using the func()[i] where i is the index location of the output you desire. In your case,
# i == 0 because of zero-index.
x = func()[0]
As a side note, if you want to get fancy in Python 3, you could do something like this,
# This works the other way around.
*_, y = func()
Your function only outputs two potential values, so this does not look too powerful until you have a case like this:
def func():
    return 1, 2, 3, 4

# I only want the first and last.
x, *_, d = func()
