I would like to use functools.partial to set a certain argument to a constant and at the same time remove the argument altogether.
Let me explain it using a simple example.
from functools import partial
def f(a, b):
    return a * b
g = partial(f, b=2)
However, this function g still has the following calling signature:
g?
Signature: g(a, *, b=2)
Call signature: g(*args, **kwargs)
Type: partial
String form: functools.partial(<function f at 0x7ff7045289d8>, b=2)
File: /opt/conda/envs/dev/lib/python3.6/functools.py
Docstring:
partial(func, *args, **keywords) - new function with partial application
of the given arguments and keywords.
I could of course do this with a lambda function like:
def f(a, b):
    return a * b
g = lambda a: f(a, b=2)
with the correct calling signature:
g?
Signature: g(a)
Docstring: <no docstring>
File: ~/Work/<ipython-input-7-fc5f3f492590>
Type: function
The downside of using lambda functions is that I would need to write down all the arguments again. In my simple example this doesn't matter, but look at this:
new_phase_func = lambda site1, site2, B_x, B_y, B_z, orbital, e, hbar: \
    phase_func(site1, site2, B_x, B_y, B_z, orbital, e, hbar, xyz_offset=(0, 0, 0))
# or this
new_phase_func = partial(phase_func, xyz_offset=(0, 0, 0))
(Note: rebinding the lambda to the same name phase_func would make it call itself recursively, so it is bound to a new name here.)
Why would I even want this?
Later in my code I use wrappers that can generate a new function by multiplying two other functions, like combine(function1, function2, operator.mul). This combine function looks at all the arguments, therefore I need to remove the argument after setting it.
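One workaround (a sketch, not from the question): functools.partial objects accept attribute assignment, and inspect.signature honors an explicit __signature__ attribute, so you can attach a signature with the pre-set parameter removed:

```python
import inspect
from functools import partial

def f(a, b):
    return a * b

g = partial(f, b=2)

# build f's signature without the pre-set parameter 'b'
# and attach it to the partial object
sig = inspect.signature(f)
g.__signature__ = sig.replace(
    parameters=[p for p in sig.parameters.values() if p.name != 'b'])

print(inspect.signature(g))  # (a)
print(g(3))                  # 6
```

Introspection tools that go through inspect.signature will now see only the remaining parameter, while the partial still calls f with b=2.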
Related
suppose I define a function of an unpackable array (e.g. list)
def f(params):
    a, b, c = params
    return a + b**2 + c**2
is there a way to integrate over only the first parameter? I tried
from scipy.integrate import quad
def integrate(b, c):
    return quad(f, 0, 1, args=(b, c))
but it returns an error because the function f only takes one positional argument. I guess I have to unpack it in the integration step, but unsure how.
Thanks!
If these are the only two use cases for using this function you can do something like the following:
def f(*args):
    if len(args) == 1:
        a, b, c = args[0]
    else:
        a, b, c = args
    return a + b**2 + c**2
This assumes that if you pass one argument to the function it can be unpacked, and if you pass more than one you pass a, b, c directly.
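With that flexible signature in place, both calling conventions work, and the quad call from the question works via the args tuple (the quad lines are commented out since they assume scipy is installed):

```python
def f(*args):
    # accept either a single packed sequence or separate arguments
    if len(args) == 1:
        a, b, c = args[0]
    else:
        a, b, c = args
    return a + b**2 + c**2

print(f([1, 2, 3]))  # 14
print(f(1, 2, 3))    # 14

# and the integration from the question (assumes scipy):
# from scipy.integrate import quad
# def integrate(b, c):
#     return quad(f, 0, 1, args=(b, c))
```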
In a more generic setting, you can define different behavior of a function depending on the input type using functools.singledispatch. In this case this would look as follows:
from functools import singledispatch

@singledispatch
def f(arg1, arg2=None, arg3=None):
    a, b, c = arg1
    return a + b**2 + c**2

@f.register(int)
@f.register(float)
def _(arg1, arg2, arg3):
    return arg1 + arg2**2 + arg3**2
The advantage of this would be that you can extend it for different input types if that is needed.
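For example, a self-contained sketch (repeating the base function) that registers an additional handler for dict input, a type the answer above does not cover:

```python
from functools import singledispatch

@singledispatch
def f(arg1, arg2=None, arg3=None):
    a, b, c = arg1          # default: any unpackable sequence
    return a + b**2 + c**2

@f.register(int)
@f.register(float)
def _(arg1, arg2, arg3):
    return arg1 + arg2**2 + arg3**2

@f.register(dict)
def _(arg1, arg2=None, arg3=None):
    # hypothetical extension: parameters packed in a dict
    return arg1['a'] + arg1['b']**2 + arg1['c']**2

print(f([1, 2, 3]))                 # 14, dispatches to the default
print(f(1, 2, 3))                   # 14, dispatches on int
print(f({'a': 1, 'b': 2, 'c': 3}))  # 14, dispatches on dict
```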
I want to pass a function, f(a=1,b=2) into g and use the 'a' value in g
def f(a,b): pass
def g(f): #print f.a
g(f(1,2)) should result in an output of 1
I looked into the inspect module but can't seem to get hold of f in g
This is as far as my programming knowledge has got me :
def g(f):
    print(list(inspect.signature(f).parameters.keys()))
g(f(1,2)) results in: TypeError: None is not a callable object
This is not possible. When you call g(f(1,2)), f(1,2) is finished before g runs. The only way this would be possible would be for f to return its arguments.
You need to call g appropriately, so the f, as a function, is an argument:
g(f, 1, 2)
Now work that into your function:
def g(f, *args):
    print(list(inspect.signature(f).parameters.keys()))
... and from here, you can iterate through args to make the proper call to f.
Does that get you moving?
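One way to finish that iteration step (a sketch; inspect.signature(...).bind maps the positional args onto the parameter names, and 'a' is the name from the question):

```python
import inspect

def f(a, b):
    pass

def g(func, *args):
    # bind the supplied positional args to func's parameter names
    bound = inspect.signature(func).bind(*args)
    return bound.arguments['a']

print(g(f, 1, 2))  # 1
```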
You could do something like this:
def f(a, b):
    print(a)
    print(b)
Then define wrapper as:
def wrapper(func, a, b):
    func(a, b)
Or if you want more flexibility use *args:
def wrapper(func, *args):
    func(*args)
That flexibility comes with some risk if the number of arguments doesn't match, which means you'll need to take care that every func passed to wrapper has a consistent signature.
You could use **kwargs which would help with the above, then:
def wrapper(func, **kwargs):
    func(**kwargs)
Then calls to wrapper would look like:
wrapper(f, a=1, b=2)
This would allow for more flexibility, and the signature of func could vary as needed.
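A quick illustration of that flexibility (a sketch; the wrapper returns the result instead of just calling, so the effect is visible, and f/h are hypothetical callees with different signatures):

```python
def wrapper(func, **kwargs):
    # forward whatever keyword arguments the caller supplied
    return func(**kwargs)

def f(a, b):
    return (a, b)

def h(x):
    return x

print(wrapper(f, a=1, b=2))  # (1, 2)
print(wrapper(h, x=5))       # 5
```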
You could turn wrapper into a decorator but that's another question.
Is it possible to define a closure for a function which is already defined?
For example I'd like to have a "raw" function and a function which already has some predefined values set by a surrounding closure.
Here is some code showing what I can do with a closure to add predefined variables to a function definition:
def outer(a, b, c):
    def fun(d):
        print(a + b + c - d)
    return fun
foo = outer(4, 5, 6)
foo(10)
Now I want to have a definition of fun outside of a wrapping closure function, to be able to call fun either with variables from a closure or by passing variables directly. I know that I need to redefine a function to make it usable in a closure, thus I tried using lambda for it:
def fun(a, b, c, d):  # raw function
    print(a + b + c - d)

def clsr(func):  # make a "closure" decorator
    def wrap(*args):
        return lambda *args: func(*args)
    return wrap

foo = clsr(fun)(5, 6, 7)  # make a closure with values already defined
foo(10)  # raises TypeError: fun() missing 3 required positional arguments: 'a', 'b', and 'c'
fun(5, 6, 7, 10)  # prints 8
What I also tried is using wraps from functools, but I was not able to make it work.
But is this even possible? And if yes: Is there any module which already implements decorators for this?
You can just define the wrap on the fly:
def fun(a, b, c, d):  # raw function
    print(a + b + c - d)

def closed(d):
    fun(5, 6, 7, d)

closed(10)
You can use this with lambda, but as @juanpa points out, you should not if there is no reason to. The above code will print 8. This method, by the way, is not Python specific; most languages support it.
But if you need a closure in the sense that it relies on the wrapper's variables, then no, and there is good reason not to: it would create an essentially non-working function that relies on wrapping. In that case using a class may be better:
class fun:
    def __init__(self, *args):  # can use specific names, not just *args
        self.args = args        # or meaningful names
    def __call__(self, a, b, c, d):  # raw function
        print(a + b + c - d, self.args)

def closed(d):
    fun("some", 3, "more", ['args'])(5, 6, 7, d)

closed(10)
or using *args/**kwargs directly and passing extra variables through that. Otherwise I am not familiar with a "inner function" construct that only works after wrapping.
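For completeness, functools.partial in the standard library already implements this pre-binding for an existing function, which answers the "is there a module" part of the question (a sketch; fun returns instead of printing so the result is visible):

```python
from functools import partial

def fun(a, b, c, d):  # raw function
    return a + b + c - d

foo = partial(fun, 5, 6, 7)  # a, b, c pre-bound

print(foo(10))           # 8, called with the pre-bound values
print(fun(5, 6, 7, 10))  # 8, the raw function still works directly
```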
Python and Excel have different behaviour for defaults. Python passes optional arguments by keyword, while Excel passes them only positionally, even for defaults. As a result, an unset argument is in reality passed, as None. Take as an example the scipy function brentq:
brentq(f, a, b, xtol=1e-12, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True, *args)
calling it from Excel with xtol and rtol unset:
brentq(f,a,f,,,50)
will in reality be seen as
brentq(f,a,f,None,None,50)
and of course Python will not like None for xtol,rtol.
As a workaround, until now I have used a function that checks for default values:
def checkvals(f, args):
    a = inspect.getargspec(f)
    defs = zip(a.args[-len(a.defaults):], a.defaults)
    for x in defs:
        key = x[0]
        if args[key] is None:
            args[key] = x[1]
    return args
and I wrap brentq as follows:
def brentq(f, a, b, xtol=1e-12, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True, *args):
    x = checkvals(brentq, locals())
    return scipy.optimize.brentq(f, a, b, *args, x['xtol'], x['rtol'], x['maxiter'], x['full_output'], x['disp'])
It works, meaning that x['xtol'] and x['rtol'] are restored to their defaults. However, I was wondering if there is a better way to do it.
In other words: is it possible to modify locals() inside a function, and force the function to use the modified values?
If Excel passes None when you skip an argument, then put that in your Python function signature as the default argument. In fact, this is the Pythonic way to do it (for various reasons including default-argument mutability that aren't worth getting into here).
def brentq(f, a, b, xtol=None, rtol=None, maxiter=100, full_output=None, disp=None, *args):
    if xtol is None: xtol = 1e-12
    if rtol is None: rtol = 4.4408920985006262e-16
    if full_output is None: full_output = False
    if disp is None: disp = True
    # ... rest of function
I'm satisfied with Kyle's final suggestion. Following it, the Excel-Python interface would look like this (example tried in Spyder):
import scipy.optimize
def f(x):
return -2*x**4 + 2*x**3 + -16*x**2 + -60*x + 100
#this is the wrapper for Excel
def brentqWrapper(f, a, b, xtol=None, rtol=None, maxiter=None, full_output=None, disp=None, *args):
    return scipy.optimize.brentq(**{k: v for k, v in locals().items() if v is not None})

# this simulates the excel call with unset inner optional parameters:
print(brentqWrapper(f, 0, 2, None, None, 50))
>> 1.24078711375
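The locals()-filtering pattern generalizes beyond brentq. Here is a scipy-free sketch of the same idea (inner and wrapper are hypothetical names): the wrapper exposes None defaults for Excel, then forwards only the arguments that were actually set, letting the inner function's own defaults fill the gaps.

```python
def inner(a, b, xtol=1e-12, maxiter=100):
    return (a, b, xtol, maxiter)

def wrapper(a, b, xtol=None, maxiter=None):
    # forward only the arguments that were actually set;
    # unset (None) ones fall back to inner's own defaults
    return inner(**{k: v for k, v in locals().items() if v is not None})

print(wrapper(1, 2, None, 50))  # (1, 2, 1e-12, 50)
```

Note that locals() is evaluated in the wrapper's scope (as the iterable of the comprehension), so it sees exactly the wrapper's parameters.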
I have a Python code that uses the following functions:
def func1(arguments a, b, c):
def func2(arguments d, e, f):
def func3(arguments g, h, i):
Each of the above functions configures a CLI command on a product.
In addition, for each of the above functions, there is the de-init function which deletes the CLI command.
def de_init_func1(arguments x, y, z):
def de_init_func2(arguments l, m, n):
def de_init_func3(arguments q, r, s):
Suppose I have a script which configures lots of CLI commands using the functions func1, func2 and func3, and before the script completes, the script should remove all the CLI commands that it configured.
For this to happen, each time func1/2/3 is invoked, I need to add the equivalent de_init_func CALL to a list, so by the end of the script, I can iterate this list, and invoke the de-init methods one by one.
How can I add a "func1(arguments) call" to a list without invoking it while adding it to the list.
If I just add the func1(arguments) call as a string "func1(arguments)", then when I iterate the list I won't be able to invoke the function calls, because the interpreter will treat the list items as strings and not as function calls...
At the simplest level, you can simply use tuples for referencing function calls. For example, a call to func1(a, b, c) would be referenced by the tuple (func1, a, b, c). You can then safely put those tuples in a list.
To later execute the function represented by such a tuple (say t), simply use:
t[0](*t[1:])
That is: call the function in t[0] with the arguments in the remainder of the tuple.
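A short sketch of the tuple approach, using a hypothetical de-init function:

```python
def de_init_func1(x, y, z):
    return ('deleted', x, y, z)

to_undo = []
to_undo.append((de_init_func1, 1, 2, 3))  # stored, not called

# later, at the end of the script:
for t in to_undo:
    print(t[0](*t[1:]))  # ('deleted', 1, 2, 3)
```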
How can I add a "func1(arguments) call" to a list without invoking it while adding it to the list.
There are at least two ways:
def test(x):
    print("arg was " + str(x))

toDoList = []
args = ["hello"]
toDoList.append(lambda: test(*args))
# doesn't run yet
# run it
for f in toDoList:
    f()
If you think you might want to inspect or change the args before running, this next one is better:
def test(x):
    print("arg was " + str(x))

toDoList = []
args = ["hello"]
toDoList.append({'f': test, 'a': args})
# to run
for item in toDoList:
    item['f'](*item['a'])
I'd suggest using functools.partial:
from functools import partial
L.append(partial(de_init_func1, x, y, z))
L.append(partial(de_init_func2, l, m, n))
for f in L:
    f()
This supports kwargs if needed. Also, if there are shared arguments for the functions or some are unknown in the beginning, you can postpone their passing until the final call.
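A sketch of that postponed passing (the session argument and function body are hypothetical): each partial pre-binds its own keyword argument, and the shared argument is supplied only when the teardown loop runs.

```python
from functools import partial

def de_init_func1(session, x):
    return f"removed {x} via {session}"

L = [partial(de_init_func1, x=1), partial(de_init_func1, x=2)]

# the shared argument is supplied only at teardown time
for f in L:
    print(f("session-A"))
```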
I think you should use a class: funcx as the constructor (__init__) and de_init_funcx as the destructor (__del__).
class Class1:
    def __init__(self, a, b, c):
        ...  # configure the CLI command; store whatever __del__ will need
    def __del__(self):  # __del__ takes only self
        ...  # delete the CLI command
You can do this:
setup = [func1, func2, func3]
teardown = [de_init_func1, de_init_func2, de_init_func3]
list(map(lambda func: func(arguments), setup))     # calls all the setup functions
list(map(lambda func: func(arguments), teardown))  # calls all the teardown functions
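Note that in Python 3 map is lazy, so nothing runs until the iterator is consumed; a plain loop or list comprehension makes the calls explicit. A runnable sketch with hypothetical one-element lists:

```python
def func1(arg):
    return f"configured {arg}"

def de_init_func1(arg):
    return f"removed {arg}"

setup = [func1]
teardown = [de_init_func1]

configured = [func("cmd") for func in setup]   # calls all the setup functions
removed = [func("cmd") for func in teardown]   # calls all the teardown functions
print(configured, removed)
```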