Ways to define and use partially bound functions - python

The two ways I'm aware of to have a partially-bound function that can be called later are:
apply_twice = lambda f: lambda x: f(f(x))
square2x = apply_twice(lambda x: x*x)
square2x(2)
# 16
And
def apply_twice(f):
    def apply(x):
        return f(f(x))
    return apply

square_2x = apply_twice(lambda x: x*x)
square_2x(4)
# 256
Are there any other common ways to pass around or use partially-bound functions?

functools.partial can be used to partially apply an ordinary Python function. This is especially useful if you already have a regular function and want to apply only some of the arguments.
from functools import partial
def apply_twice(f, x):
    return f(f(x))
square2x = partial(apply_twice, lambda x: x*x)
print(square2x(4))
It's also important to remember that functions are only one type of callable in Python, and we're free to define callables ourselves as ordinary user-defined classes. So if you have some complex operation that you want to behave like a function, you can always write a class, which lets you document in more detail what it is and what the different parts mean.
class MyApplyTwice:
    def __init__(self, f):
        self.f = f

    def __call__(self, x):
        return self.f(self.f(x))
square2x = MyApplyTwice(lambda x: x*x)
print(square2x(4))
While overly verbose in this example, it can be helpful to write your function out as a class if it's going to be storing state long-term or might be doing confusing mutable things with its state. It's also useful to keep in mind for learning purposes, as it's a healthy reminder that closures and objects are two sides of the same coin. They're really the same thing, viewed in a different light.
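To make that "two sides of the same coin" point concrete, here is a minimal sketch (names are illustrative) of the same stateful counter written both ways; the closure keeps its state in an enclosed variable, the object keeps it in an attribute:
def make_counter():
    count = 0  # state lives in the enclosing scope
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

class Counter:
    def __init__(self):
        self.count = 0  # state lives in an attribute

    def __call__(self):
        self.count += 1
        return self.count

c1, c2 = make_counter(), Counter()
assert c1() == c2() == 1
assert c1() == c2() == 2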

You can also do this with functools.partial():
import functools

def apply_twice(f, x):
    return f(f(x))

square_2x = functools.partial(apply_twice, lambda x: x*x)

This isn't really partial binding, assuming you mean partial application.
Partial application is when you create a function that does the same thing as another function by fixing some number of its arguments, producing a function of smaller arity (the arity of a function is the number of arguments it takes).
So, for example,
def foo(a, b, c):
    return a + b + c
A partially applied version of foo would be something like:
def partial_foo(a, b):
    return foo(a, b, 42)
Or, with a lambda expression:
partial_foo = lambda a, b: foo(a, b, 42)
Note, however, that the above goes against the official style guidelines: per PEP 8, you shouldn't assign a lambda expression to a name; if you're going to do that, just use a full function definition.
The functools module has a helper for partial application:
import functools
partial_foo = functools.partial(foo, c=42)
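Calling it with the remaining two arguments then behaves exactly like foo with c pre-filled:
partial_foo(1, 2)  # same as foo(1, 2, c=42) -> 45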
Note, you may have heard about "currying", which sometimes gets confused with partial application. Currying is when you decompose an n-ary function into n unary (single-argument) functions. So, more concretely, for foo:
curried_foo = lambda a: lambda b: lambda c: a + b + c
Or in long form:
def curried_foo(a):
    def _curr0(b):
        def _curr1(c):
            return a + b + c
        return _curr1
    return _curr0
And the important part, curried_foo(1)(2)(3) == foo(1, 2, 3)
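If you want currying mechanically rather than by hand, a small helper can be sketched on top of the standard library's inspect module (an illustrative implementation, not a built-in):
import inspect

def curry(func):
    # Collect positional arguments across calls until func has enough.
    arity = len(inspect.signature(func).parameters)
    def curried(*args):
        if len(args) >= arity:
            return func(*args)
        return lambda *more: curried(*args, *more)
    return curried

def foo(a, b, c):  # the foo from above
    return a + b + c

curried_foo = curry(foo)
assert curried_foo(1)(2)(3) == foo(1, 2, 3) == 6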

Related

Is there a better way than this to write Python functions that "depend on parameters"?

Consider the Python function line defined as follows:
def line(m, b):
    def inner_function(x):
        return m * x + b
    return inner_function
This function has the property that for any floats m and b, the object line(m, b) is a Python function, and when line(m, b) is called on a float x, it returns a float line(m, b)(x). The float line(m, b)(x) can be interpreted as the value of the line with slope m and y-intercept b at the point x. This is one method for writing a Python function that "depends on parameters" m and b.
Is there a special name for this method of writing a Python function that depends on some parameters?
Is there a more Pythonic and/or computationally efficient way to write a function that does the same thing as line above?
This is called a closure, and it's a perfectly reasonable way to write one, as well as one of the most efficient means of doing so (in the CPython reference interpreter anyway).
The only other common pattern I know of is the equivalent of C++'s functors, where a class has the state as attributes, and the additional parameters are passed to __call__, e.g. to match your case:
class Line:
    def __init__(self, m, b):
        self.m = m
        self.b = b

    def __call__(self, x):
        return self.m * x + self.b
It's used identically, either creating/storing an instance and reusing it, or as in your example, creating it, using it once, and throwing it away (Line(m, b)(x)). Functors are slower than closures though (as attribute access is more expensive than reading from nested scope, at least in the CPython reference interpreter), and as you can see, they're more verbose as well, so I'd generally recommend the closure unless your needs require the greater flexibility/power of class instances.
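If you want to check the relative call overhead on your own machine, a rough timeit sketch (using the line and Line definitions above; exact numbers vary by interpreter and hardware):
from timeit import timeit

f_closure = line(2, 3)   # closure version from the question
f_functor = Line(2, 3)   # functor version from above

print(timeit("f(10)", globals={"f": f_closure}))  # typically the smaller number
print(timeit("f(10)", globals={"f": f_functor}))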
I support @ShaddowRanger's answer. But using partial is another nice approach.
import functools
def f(m, b, x):
    return m * x + b
line = functools.partial(f, 2, 3)
line(5)
=> 13
One thing worth pointing out is that lambda objects and the OP's inner_function aren't pickleable, whereas line here, as well as @ShaddowRanger's Line objects, are, which makes them a bit more useful.
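A quick sketch of that difference: the partial built from the module-level f above round-trips through pickle, while a closure fails because pickle serializes functions by qualified name (make_line here is a hypothetical stand-in for the OP's line):
import pickle

print(pickle.loads(pickle.dumps(line))(5))  # works -> 13

def make_line(m, b):  # closure version, like OP's inner_function
    def inner_function(x):
        return m * x + b
    return inner_function

try:
    pickle.dumps(make_line(2, 3))
except (pickle.PicklingError, AttributeError) as err:
    print("closure is not pickleable:", err)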
This is a little shorter:
def line(m, b):
    return lambda x: m*x + b

Closure after function definition

Is it possible to define a closure for a function which is already defined?
For example I'd like to have a "raw" function and a function which already has some predefined values set by a surrounding closure.
Here is some code showing what I can do with a closure to add predefined variables to a function definition:
def outer(a, b, c):
    def fun(d):
        print(a + b + c - d)
    return fun

foo = outer(4, 5, 6)
foo(10)
Now I want to have a definition of fun outside of a wrapping closure function, to be able to call fun either with variables from a closure or by passing variables directly. I know that I need to redefine a function to make it usable in a closure, thus I tried using lambda for it:
def fun(a, b, c, d):  # raw function
    print(a + b + c - d)

def clsr(func):  # make a "closure" decorator
    def wrap(*args):
        return lambda *args: func(*args)
    return wrap

foo = clsr(fun)(5, 6, 7)  # make a closure with values already defined
foo(10)  # raises TypeError: fun() missing 3 required positional arguments: 'a', 'b', and 'c'
fun(5, 6, 7, 10)  # prints 8
What I also tried is using wraps from functools, but I was not able to make it work.
But is this even possible? And if yes: Is there any module which already implements decorators for this?
You can just define the wrap on the fly:
def fun(a, b, c, d):  # raw function
    print(a + b + c - d)

def closed(d):
    fun(5, 6, 7, d)

closed(10)
You can use this with lambda, but @juanpa points out you should not if there is no reason to. The above code will result in 8. This method, by the way, is not Python specific; most languages would support it.
But if you need a closure in the sense that it relies on the wrapper's variables, then no, and there is good reason not to: it would create an essentially non-working function that relies on being wrapped. In this case using a class may be better:
class fun:
    def __init__(self, *args):  # can use specific things, not just *args
        self.args = args        # or meaningful names

    def __call__(self, a, b, c, d):  # raw function
        print(a + b + c - d, self.args)

def closed(d):
    fun("some", 3, "more", ['args'])(5, 6, 7, d)

closed(10)
or using *args/**kwargs directly and passing the extra variables through that. Otherwise I am not familiar with an "inner function" construct that only works after wrapping.
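For the literal question, pre-filling some arguments of an already-defined function, the standard library covers this with functools.partial; a minimal sketch:
from functools import partial

def fun(a, b, c, d):  # raw function
    print(a + b + c - d)

foo = partial(fun, 5, 6, 7)  # a, b, c pre-bound
foo(10)           # prints 8
fun(5, 6, 7, 10)  # the raw function remains directly callable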

Producing combinations of lambda functions compositions

I am facing a challenging issue in making my Python 3 code more elegant.
Suppose I have a number of functions, each with a different number of inputs, for example something like this:
def fun1(a, b):
    return a + b

def fun2(c, d, e):
    return c*d + e

def fun3(x):
    return x*x
These functions need to be agglomerated into a single function to be used as the optimization function of a numerical solver.
However, I need to create different combinations of various operations on these functions, for example multiplying the outputs of the first two functions and adding the third.
The manual solution is to create a specific lambda function:
fun = lambda x : fun1(x[0],x[1])*fun2(x[2],x[3],x[4]) + fun3(x[4])
but the number of functions I have is large and I need to produce all the possible combinations of them.
I would like to systematically be able to compose these functions and always knowing the mapping from the arguments of higher level function fun to the lower level arguments of each single function.
In this case I manually specified that x[0] corresponds to the argument a of fun1, x[1] corresponds to argument b of fun1 etcetera.
Any idea?
It sounds like you are trying to do what is known as symbolic regression. This problem is often solved via some variation on genetic algorithms which encode the functional relationships in the genes and then optimise based on a fitness function which includes the prediction error as well as a term which penalises more complicated relationships.
Here are two libraries which solve this problem for you:
GPLearn
dcgpy
The following classes provide a rudimentary way of composing functions and keeping track of the number of arguments each one requires, which appears to be the main problem you have:
class Wrapper:
    def __init__(self, f):
        self.f = f
        self.n = f.__code__.co_argcount

    def __call__(self, x):
        return self.f(*x)

    def __add__(self, other):
        return Add(self, other)

    def __mul__(self, other):
        return Mul(self, other)

class Operator:
    def __init__(self, left, right):
        self.left = left
        self.right = right
        self.n = left.n + right.n

class Mul(Operator):
    def __call__(self, x):
        return self.left(x[:self.left.n]) * self.right(x[self.left.n:])

class Add(Operator):
    def __call__(self, x):
        return self.left(x[:self.left.n]) + self.right(x[self.left.n:])
To use them, you first create wrappers for each of your functions:
w1 = Wrapper(fun1)
w2 = Wrapper(fun2)
w3 = Wrapper(fun3)
Then you can add and multiply the wrappers to get a new function-like object:
(w1 + w2*w3)([1, 2, 3, 4, 5, 6])
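For that input this evaluates to fun1(1, 2) + fun2(3, 4, 5)*fun3(6) = 3 + 17*36 = 615; the flat argument list is consumed left to right according to each wrapper's argument count n.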
This could be a solution:
def fun1(a, b):
    return a + b

def fun2(c, d, e):
    return c + d + e

def compose(f1, f2):
    # co_argcount counts only the declared parameters; co_varnames would
    # also include local variables, which breaks for less trivial functions
    n1 = f1.__code__.co_argcount
    n2 = f2.__code__.co_argcount
    F1 = lambda x: f1(*[x[i] for i in range(0, n1)]) * f2(*[x[i] for i in range(n1, n1 + n2)])
    return F1

print(compose(fun1, fun2)([1, 2, 3, 4, 5]))  # (1+2) * (3+4+5) = 36

From Haskell to Python: how to do currying?

I recently started coding in Python and I was wondering if it's possible to return a function that specializes another function.
For example, in Haskell you can create a function that adds 5 to any given number like this:
sumFive = (+5)
Is it somehow possible in Python?
I think the other answers are misunderstanding the question. I believe the OP is asking about partial application of a function; in his example the function is (+).
If the goal isn't partial application, the solution is as simple as:
def sumFive(x): return x + 5
For partial application in Python, we can use this function: https://docs.python.org/2/library/functools.html#functools.partial
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc
Then, we must turn the + operator into a function (I don't believe there's a lightweight syntax to do so like in Haskell):
def plus(x, y): return x + y
Finally:
sumFive = partial(plus, 5)
Not nearly as nice as in Haskell, but it works:
>>> sumFive(7)
12
Python's design does not naturally support the evaluation of a multi-variable function into a sequence of single-variable functions (currying). As other answers point out, the related (but distinct) concept of partial application is more straightforward to do using partial from the functools module.
However, the PyMonad library supplies you with the tools to make currying possible in Python, providing a "collection of classes for programming with functors, applicative functors and monads."
Use the curry decorator to decorate a function that accepts any number of arguments:
from pymonad import curry
@curry
def add(x, y):
    return x + y
It is then very easy to curry add. The syntax is not too dissimilar to Haskell's:
>>> add5 = add(5)
>>> add5(12)
17
Note that here the add and add5 functions are instances of PyMonad's Reader monad class, not a normal Python function object:
>>> add
<pymonad.Reader.Reader at 0x7f7024ccf908>
This allows, for example, the possibility of using simpler syntax to compose functions (easy to do in Haskell, normally much less so in Python).
Finally, it's worth noting that the infix operator + is not a Python function: + calls into the left-hand operand's __add__ method, or the right-hand operand's __radd__ method and returns the result. You'll need to decorate these class methods for the objects you're working with if you want to curry using + (disclaimer: I've not tried to do this yet).
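That said, the standard operator module exposes function versions of the infix operators, so combined with functools.partial you can get quite close to Haskell's (+5) without writing plus by hand:
from functools import partial
from operator import add  # function form of the + operator

sumFive = partial(add, 5)
print(sumFive(7))  # 12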
Yup. Python supports lambda expressions:
sumFive = lambda x: x + 5
for i in range(5):
    print(sumFive(i), end=' ')
# OUTPUT: 5 6 7 8 9
Python functions can return functions, allowing you to create higher-order functions. For example, here is a higher-order function which can specialize a function of two variables:
def specialize(f, a, i):
    def g(x):
        if i == 0:
            return f(a, x)
        else:
            return f(x, a)
    return g
Used like this:
>>> def subtract(x,y): return x - y
>>> f = specialize(subtract,5,0)
>>> g = specialize(subtract,5,1)
>>> f(7)
-2
>>> g(7)
2
But -- there is really no need to reinvent the wheel, the module functools has a number of useful higher-order functions that any Haskell programmer would find useful, including partial for partial function application, which is what you are asking about.
As was pointed out, Python does have lambda functions, so the following solves the problem:
# Haskell: sumFive = (+5)
sumFive = lambda x : x + 5
I think this becomes even more useful given that Python has first-class functions:
def summation(n, term):
    total, k = 0, 1
    while k <= n:
        total, k = total + term(k), k + 1
    return total

def identity(x):
    return x

def sum_naturals(n):
    return summation(n, identity)

sum_naturals(10)  # Returns 55

# Now for something a bit more complex
def pi_term(x):
    return 8 / ((4*x-3) * (4*x-1))

def pi_sum(n):
    return summation(n, pi_term)

pi_sum(1e6)  # returns: 3.141592153589902
You can find more on functional programming and python here
For the most generic Haskell-style partial application, look at partial from the functools module.

Function closure vs. callable class

In many cases, there are two implementation choices: a closure and a callable class. For example,
class F:
    def __init__(self, op):
        self.op = op

    def __call__(self, arg1, arg2):
        if self.op == 'mult':
            return arg1 * arg2
        if self.op == 'add':
            return arg1 + arg2
        raise InvalidOp(self.op)
f = F('add')
or
def F(op):
    if op == 'or':
        def f_(arg1, arg2):
            return arg1 | arg2
        return f_
    if op == 'and':
        def g_(arg1, arg2):
            return arg1 & arg2
        return g_
    raise InvalidOp(op)

f = F('add')
What factors should one consider in making the choice, in either direction?
I can think of two:
It seems a closure would always have better performance (I can't think of a counterexample).
I think there are cases when a closure cannot do the job (e.g., if its state changes over time).
Am I correct in these? What else could be added?
Closures are faster. Classes are more flexible (i.e. more methods available than just __call__).
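A small sketch of that flexibility (illustrative names): besides __call__, a class can carry extra methods and a useful repr, which a closure cannot expose cleanly:
class Multiplier:
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return self.factor * x

    def __repr__(self):          # nicer debugging than <function ...>
        return f"Multiplier({self.factor})"

    def rescale(self, factor):   # state can be adjusted after creation
        self.factor = factor

double = Multiplier(2)
print(double(21), double)  # 42 Multiplier(2)
double.rescale(3)
print(double(21))          # 63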
I realize this is an older posting, but one factor I didn't see listed is that in Python (pre-nonlocal) you cannot modify a local variable contained in the referencing environment. (In your example such modification is not important, but technically speaking the lack of being able to modify such a variable means it's not a true closure.)
For example, the following code doesn't work:
def counter():
    i = 0
    def f():
        i += 1
        return i
    return f

c = counter()
c()
The call to c above will raise an UnboundLocalError exception.
This is easy to get around by using a mutable, such as a dictionary:
def counter():
    d = {'i': 0}
    def f():
        d['i'] += 1
        return d['i']
    return f

c = counter()
c()  # 1
c()  # 2
but of course that's just a workaround.
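On Python 3, the nonlocal statement removes the need for that workaround:
def counter():
    i = 0
    def f():
        nonlocal i  # rebind the enclosing variable instead of shadowing it
        i += 1
        return i
    return f

c = counter()
c()  # 1
c()  # 2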
Please note that because of an error previously found in my testing code, my original answer was incorrect. The revised version follows.
I made a small program to measure running time and memory consumption. I created the following callable class and a closure:
class CallMe:
    def __init__(self, context):
        self.context = context

    def __call__(self, *args, **kwargs):
        return self.context(*args, **kwargs)

def call_me(func):
    return lambda *args, **kwargs: func(*args, **kwargs)
I timed calls to simple functions accepting different numbers of arguments (math.sqrt() with 1 argument, math.pow() with 2 and max() with 12).
I used CPython 2.7.10 and 3.4.3+ on Linux x64. I was only able to do memory profiling on Python 2. The source code I used is available here.
My conclusions are:
Closures run faster than equivalent callable classes: about 3 times faster on Python 2, but only 1.5 times faster on Python 3. The gap narrowed both because closures became slower and callable classes became faster.
Closures take less memory than equivalent callable classes: roughly 2/3 of the memory (only tested on Python 2).
While not part of the original question, it's interesting to note that the run time overhead for calls made via a closure is roughly the same as a call to math.pow(), while via a callable class it is roughly double that.
These are very rough estimates, and they may vary with hardware, operating system and the function you're comparing it to. However, it gives you an idea about the impact of using each kind of callable.
Therefore, this supports (conversely to what I've written before) that the accepted answer given by @RaymondHettinger is correct, and closures should be preferred for indirect calls, at least as long as it doesn't impede readability. Also, thanks to @AXO for pointing out the mistake in my original code.
I consider the class approach to be easier to understand at one glance, and therefore, more maintainable. As this is one of the premises of good Python code, I think that all things being equal, one is better off using a class rather than a nested function. This is one of the cases where the flexible nature of Python makes the language violate the "there should be one, and preferably only one, obvious way of doing something" predicate for coding in Python.
The performance difference for either side should be negligible - and if you have code where performance matters at this level, you certainly should profile it and optimize the relevant parts, possibly rewriting some of your code as native code.
But yes, if there was a tight loop using the state variables, accessing the closure variables should be slightly faster than accessing the class attributes. Of course, this could be overcome by simply inserting a line like op = self.op inside the class method, before entering the loop, making the variable access inside the loop local; this avoids an attribute look-up and fetch on each access. Again, the performance differences should be negligible, and you have a more serious problem if you need this little extra performance and are coding in Python.
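That hoisting trick, sketched with illustrative names:
class Scaler:
    def __init__(self, op):
        self.op = op

    def apply_all(self, values):
        op = self.op  # one attribute lookup instead of one per iteration
        return [op(v) for v in values]

print(Scaler(lambda v: v * 2).apply_all(range(5)))  # [0, 2, 4, 6, 8]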
Mr. Hettinger's answer still holds ten years later, on Python 3.10. For anyone wondering:
from timeit import timeit

class A:  # naive class
    def __init__(self, op):
        if op == "mut":
            self.exc = lambda x, y: x * y
        elif op == "add":
            self.exc = lambda x, y: x + y

    def __call__(self, x, y):
        return self.exc(x, y)

class B:  # more optimized class
    __slots__ = ('__call__',)

    def __init__(self, op):
        if op == "mut":
            self.__call__ = lambda x, y: x * y
        elif op == "add":
            self.__call__ = lambda x, y: x + y

def C(op):  # closure
    if op == "mut":
        def _f(x, y):
            return x * y
    elif op == "add":
        def _f(x, y):
            return x + y
    return _f

a = A("mut")
b = B("mut")
c = C("mut")

print(timeit("[a(x,y) for x in range(100) for y in range(100)]", globals=globals(), number=10000))
# 26.47s naive class
print(timeit("[b(x,y) for x in range(100) for y in range(100)]", globals=globals(), number=10000))
# 18.00s optimized class
print(timeit("[c(x,y) for x in range(100) for y in range(100)]", globals=globals(), number=10000))
# 12.12s closure
Using a closure seems to offer significant speed gains in cases where the call count is high. However, classes offer extensive customization and are the superior choice at times.
I'd re-write class example with something like:
class F(object):
    __slots__ = ('__call__',)

    def __init__(self, op):
        if op == 'mult':
            self.__call__ = lambda a, b: a * b
        elif op == 'add':
            self.__call__ = lambda a, b: a + b
        else:
            raise InvalidOp(op)
That gives 0.40 usec/pass (vs. 0.31 for the plain function, so it's 29% slower) on my machine with Python 3.2.2. Without object as a base class it gives 0.65 usec/pass (i.e. 55% slower than the object-based version). And for some reason, code with the op check in __call__ gives almost the same results as if it were done in __init__. With object as a base and the check inside __call__ it gives 0.61 usec/pass.
The reason why you would use classes might be polymorphism.
class UserFunctions(object):
    __slots__ = ('__call__',)

    def __init__(self, name):
        f = getattr(self, '_func_' + name, None)
        if f is None:
            raise InvalidOp(name)
        else:
            self.__call__ = f

class MyOps(UserFunctions):
    @classmethod
    def _func_mult(cls, a, b): return a * b

    @classmethod
    def _func_add(cls, a, b): return a + b
