I have two functions, f and g. Both have the same signature: (x). I want to create a new function, z, with the same signature:
def z(x):
    return f(x) * g(x)
except that I'd like to be able to write
z = f * g
instead of the above code. Is it possible?
Something close is possible:
z = lambda x: f(x) * g(x)
Personally, I find this way more intuitive than z = f * g, because mathematically, multiplying functions doesn't mean anything. Depending on the interpretation of the * operator, it may mean composition, so z(x) = f(g(x)), but definitely not multiplication of the results of invocation. The lambda above, on the other hand, is very explicit, and frankly requires only a few more characters to write.
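For contrast, both readings are easy to spell out with lambdas (a minimal sketch, assuming f and g are ordinary one-argument functions):

# Pointwise product: z(x) = f(x) * g(x)
z = lambda x: f(x) * g(x)

# Composition: z(x) = f(g(x))
compose = lambda f, g: (lambda x: f(g(x)))
z2 = compose(f, g)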
Update: Kudos to JBernardo for hacking it together. I was imagining it would be much more hacky than it turned out to be. Still, I would advise against using this in real code.
The funny thing is that it is quite possible. I made a project a few days ago to do things like that.
Here it is: FuncBuilder
For now you can only define variables, but you can use my metaclass, with the help of some other functions, to build a class that does what you want.
Problems:
It's slow
It's really slow
You may think you want this, but describing functions the way they are meant to be described is the right way.
You should use your first code.
Just as a proof of concept:
from funcbuilder import OperatorMachinery

class FuncOperations(metaclass=OperatorMachinery):
    def __init__(self, function):
        self.func = function
    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

def func(self, *n, oper=None):
    if not n:
        return type(self)(lambda x: oper(self.func(x)))
    return type(self)(lambda x: oper(self.func(x), n[0](x)))

FuncOperations.apply_operators([func, func])
Now you can write code like this:
@FuncOperations
def f(x):
    return x + 1

@FuncOperations
def g(x):
    return x + 2
And the desired behavior is:
>>> z = f * g
>>> z(3)
20
I added a better version of it on the FuncBuilder project. It works with any operation between a FuncOperation object and another callable. Also works on unary operations. :D
You can play with it to make functions like:
z = -f + g * h
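Under FuncBuilder's pointwise semantics, that expression is just shorthand for a plain lambda (a sketch, assuming f, g, and h are one-argument functions):

z = lambda x: -f(x) + g(x) * h(x)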
It can be done with the exact syntax you intended (though using lambda might be better), by using a decorator. As stated, functions don't have operators defined for them, but objects can be made callable just like functions in Python.
So the decorator below just wraps the function in an object for which multiplication by another function is defined:
class multipliable(object):
    def __init__(self, func):
        self.func = func
    def __call__(self, *args, **kw):
        return self.func(*args, **kw)
    def __mul__(self, other):
        @multipliable
        def new_func(*args, **kw):
            return self.func(*args, **kw) * other(*args, **kw)
        return new_func
@multipliable
def x():
    return 2

def y():
    return 3

z = x * y
z()

(tested in Python 2 and Python 3)
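With these definitions, z() evaluates x() * y(), i.e. 2 * 3, and returns 6.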
Related
Is it possible to output a mathematical function directly from the function implementation?
class MyFunction:
    def __init__(self, func):
        self.func = func
    def math_representation(self):
        # should return a string representation of self.func
        ...

f = lambda x: 3*x**2
myFunc = MyFunction(f)
print(myFunc.math_representation())  # prints something like 3*x**2
Of course constructing the object with the representation as a parameter is possible, and is a trivial solution. But the idea is to generate this representation.
I could also build the function with objects representing the math operations, but the idea is to do it on a regular (lambda) function.
I really don't see a way for this to happen, but I'm curious.
Thanks for any help and suggestions.
As I said, you can use SymPy if you want this to be more complex, but for simple functions (and trusted inputs), you could do something like this:
class MathFunction(object):
    def __init__(self, code):
        self.code = code
        self._func = eval("lambda x: " + code)
    def __call__(self, arg):
        return self._func(arg)
    def __repr__(self):
        return "f(x) = " + self.code
You can use it like this:
>>> sq = MathFunction("x**2")
>>> sq
f(x) = x**2
>>> sq(7)
49
This is a bit restricted, of course (only a single variable, which must be called "x", and only one parameter), but it can, of course, be expanded.
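For the fancier route the answer hints at, SymPy can parse a string into a symbolic expression and compile it back into a callable. A minimal sketch (assuming the expression uses a single symbol x):

import sympy

x = sympy.symbols('x')
expr = sympy.sympify("3*x**2")  # parse the string into a symbolic expression
func = sympy.lambdify(x, expr)  # build a plain numeric function from it

print(expr)     # 3*x**2
print(func(2))  # 12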
I recently started coding in Python and I was wondering if it's possible to return a function that specializes another function.
For example, in Haskell you can create a function that adds 5 to any given number like this:
sumFive = (+5)
Is it somehow possible in Python?
I think the other answers are misunderstanding the question. I believe the OP is asking about partial application of a function; in his example, the function is (+).
If the goal isn't partial application, the solution is as simple as:
def sumFive(x): return x + 5
For partial application in Python, we can use functools.partial (https://docs.python.org/2/library/functools.html#functools.partial); its documentation gives this pure-Python equivalent:
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc
Then, we must turn the + operator into a function (I don't believe there's a lightweight syntax to do so like in Haskell):
def plus(x, y): return x + y
Finally:
sumFive = partial(plus, 5)
Not nearly as nice as in Haskell, but it works:
>>> sumFive(7)
12
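As a side note, the standard library covers both halves of this: functools.partial is the real version of the helper above, and operator.add is a ready-made function form of +. A compact sketch:

from functools import partial
from operator import add

sumFive = partial(add, 5)
print(sumFive(7))  # 12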
Python's design does not naturally support the evaluation of a multi-variable function into a sequence of single-variable functions (currying). As other answers point out, the related (but distinct) concept of partial application is more straightforward to do using partial from the functools module.
However, the PyMonad library supplies you with the tools to make currying possible in Python, providing a "collection of classes for programming with functors, applicative functors and monads."
Use the curry decorator to decorate a function that accepts any number of arguments:
from pymonad import curry

@curry
def add(x, y):
    return x + y
It is then very easy to curry add. The syntax is not too dissimilar to Haskell's:
>>> add5 = add(5)
>>> add5(12)
17
Note that here the add and add5 functions are instances of PyMonad's Reader monad class, not normal Python function objects:
>>> add
<pymonad.Reader.Reader at 0x7f7024ccf908>
This allows, for example, the possibility of using simpler syntax to compose functions (easy to do in Haskell, normally much less so in Python).
Finally, it's worth noting that the infix operator + is not a Python function: + calls into the left-hand operand's __add__ method, or the right-hand operand's __radd__ method and returns the result. You'll need to decorate these class methods for the objects you're working with if you want to curry using + (disclaimer: I've not tried to do this yet).
Yup. Python supports lambda expressions:

sumFive = lambda x: x + 5

for i in range(5):
    print sumFive(i),
# Output: 5 6 7 8 9
Python functions can return functions, allowing you to create higher-order functions. For example, here is a higher-order function which can specialize a function of two variables:
def specialize(f, a, i):
    def g(x):
        if i == 0:
            return f(a, x)
        else:
            return f(x, a)
    return g
Used like this:
>>> def subtract(x,y): return x - y
>>> f = specialize(subtract,5,0)
>>> g = specialize(subtract,5,1)
>>> f(7)
-2
>>> g(7)
2
But there is really no need to reinvent the wheel: the functools module has a number of useful higher-order functions that any Haskell programmer would find familiar, including partial for partial function application, which is what you are asking about.
As was pointed out, Python does have lambda functions, so the following solves the problem:

# Haskell: sumFive = (+5)
sumFive = lambda x: x + 5
I think this is made more useful by the fact that Python has first-class functions:
def summation(n, term):
    total, k = 0, 1
    while k <= n:
        total, k = total + term(k), k + 1
    return total

def identity(x):
    return x

def sum_naturals(n):
    return summation(n, identity)

sum_naturals(10)  # Returns 55

# Now for something a bit more complex
def pi_term(x):
    return 8 / ((4*x-3) * (4*x-1))

def pi_sum(n):
    return summation(n, pi_term)

pi_sum(1e6)  # returns: 3.141592153589902
You can find more on functional programming in Python here.
For the most generic Haskell style currying, look at partial from the functools module.
I'm trying to construct a decorator in Python where I add a variable at the decoration stage. I know how to write a decorator where I simply run a function on the results of another function, but I'm having trouble with the syntax of adding an additional variable. Essentially, I want to take this dot product function:
def dot(x, y):
    temp1 = []
    for i in range(len(x)):
        temp1.append(float(x[i])*y[i])
    tempdot = sum(temp1)
    return tempdot
and subtract the value 'b' from the result, all in one larger function given parameters x,y,b
Am I trying to misuse the decoration functionality in this case? Thanks.
import functools

def subtracter(b):
    def wrapped(func):
        @functools.wraps(func)
        def decorated_func(*args, **kwargs):
            return func(*args, **kwargs) - b
        return decorated_func
    return wrapped
Then use it as
@subtracter(b=5)
def dot(x, y):
    temp1 = []
    for i in range(len(x)):
        temp1.append(float(x[i])*y[i])
    tempdot = sum(temp1)
    return tempdot
By the way, your dot function can be shortened with a generator expression, like so:
def dot(x, y):
    return sum(float(x)*y for x, y in zip(x, y))
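A quick usage sketch of the decorated version (values chosen purely for illustration): the wrapper computes the dot product and then subtracts b:

print(dot([1, 2], [3, 4]))  # (1*3 + 2*4) - 5 == 6.0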
In many cases, there are two implementation choices: a closure and a callable class. For example,
class F:
    def __init__(self, op):
        self.op = op
    def __call__(self, arg1, arg2):
        if self.op == 'mult':
            return arg1 * arg2
        if self.op == 'add':
            return arg1 + arg2
        raise InvalidOp(self.op)

f = F('add')
or
def F(op):
    if op == 'or':
        def f_(arg1, arg2):
            return arg1 | arg2
        return f_
    if op == 'and':
        def g_(arg1, arg2):
            return arg1 & arg2
        return g_
    raise InvalidOp(op)

f = F('and')
What factors should one consider in making the choice, in either direction?
I can think of two:
1. It seems a closure would always have better performance (I can't think of a counterexample).
2. I think there are cases when a closure cannot do the job (e.g., if its state changes over time).
Am I correct in these? What else could be added?
Closures are faster. Classes are more flexible (i.e. more methods available than just __call__).
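To illustrate the flexibility point, a minimal sketch (the class and method names here are invented for illustration): a callable class can carry extra methods and inspectable state alongside __call__:

class Adder:
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n
    def reset(self, n):  # extra behaviour a bare closure can't expose as directly
        self.n = n

add5 = Adder(5)
print(add5(10))  # 15
add5.reset(100)
print(add5(10))  # 110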
I realize this is an older posting, but one factor I didn't see listed is that in Python (pre-nonlocal) you cannot modify a local variable contained in the referencing environment. (In your example such modification is not important, but technically speaking, the inability to modify such a variable means it's not a true closure.)
For example, the following code doesn't work:
def counter():
    i = 0
    def f():
        i += 1
        return i
    return f

c = counter()
c()
The call to c above will raise an UnboundLocalError exception.
This is easy to get around by using a mutable, such as a dictionary:
def counter():
    d = {'i': 0}
    def f():
        d['i'] += 1
        return d['i']
    return f

c = counter()
c()  # 1
c()  # 2
but of course that's just a workaround.
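Since this answer is explicitly about pre-nonlocal Python, it's worth showing the modern fix: in Python 3, the nonlocal statement makes the closed-over variable writable, so the dictionary trick is unnecessary:

def counter():
    i = 0
    def f():
        nonlocal i  # rebind the enclosing i instead of raising UnboundLocalError
        i += 1
        return i
    return f

c = counter()
print(c())  # 1
print(c())  # 2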
Please note that because of an error previously found in my testing code, my original answer was incorrect. The revised version follows.
I made a small program to measure running time and memory consumption. I created the following callable class and a closure:
class CallMe:
def __init__(self, context):
self.context = context
def __call__(self, *args, **kwargs):
return self.context(*args, **kwargs)
def call_me(func):
return lambda *args, **kwargs: func(*args, **kwargs)
I timed calls to simple functions accepting different number of arguments (math.sqrt() with 1 argument, math.pow() with 2 and max() with 12).
I used CPython 2.7.10 and 3.4.3+ on Linux x64. I was only able to do memory profiling on Python 2. The source code I used is available here.
My conclusions are:
Closures run faster than equivalent callable classes: about 3 times faster on Python 2, but only 1.5 times faster on Python 3. The gap narrowed both because closures became slower and callable classes became faster.
Closures take less memory than equivalent callable classes: roughly 2/3 of the memory (only tested on Python 2).
While not part of the original question, it's interesting to note that the run time overhead for calls made via a closure is roughly the same as a call to math.pow(), while via a callable class it is roughly double that.
These are very rough estimates, and they may vary with hardware, operating system, and the function you're comparing against. However, they give you an idea of the impact of using each kind of callable.
Therefore, contrary to what I wrote before, this supports the accepted answer given by @RaymondHettinger: closures should be preferred for indirect calls, at least as long as doing so doesn't impede readability. Also, thanks to @AXO for pointing out the mistake in my original code.
I consider the class approach easier to understand at a glance, and therefore more maintainable. As this is one of the premises of good Python code, I think that, all things being equal, one is better off using a class rather than a nested function. This is one of the cases where the flexible nature of Python makes the language violate the "there should be one, and preferably only one, obvious way to do it" precept.
The performance difference for either side should be negligible - and if you have code where performance matters at this level, you certainly should profile it and optimize the relevant parts, possibly rewriting some of your code as native code.
But yes, if there were a tight loop using the state variables, accessing the closure variables should be slightly faster than accessing the class attributes. Of course, this could be overcome by simply inserting a line like op = self.op inside the class method before entering the loop, making each access inside the loop a local-variable access and avoiding an attribute lookup on every iteration. Again, the performance differences should be negligible, and you have a more serious problem if you need this tiny bit of extra performance and are coding in Python.
Mr. Hettinger's answer still holds true ten years later, in Python 3.10. For anyone wondering:
from timeit import timeit

class A:  # Naive class
    def __init__(self, op):
        if op == "mut":
            self.exc = lambda x, y: x * y
        elif op == "add":
            self.exc = lambda x, y: x + y
    def __call__(self, x, y):
        return self.exc(x, y)

class B:  # More optimized class
    __slots__ = ('__call__',)
    def __init__(self, op):
        if op == "mut":
            self.__call__ = lambda x, y: x * y
        elif op == "add":
            self.__call__ = lambda x, y: x + y

def C(op):  # Closure
    if op == "mut":
        def _f(x, y):
            return x * y
    elif op == "add":
        def _f(x, y):
            return x + y
    return _f

a = A("mut")
b = B("mut")
c = C("mut")

print(timeit("[a(x,y) for x in range(100) for y in range(100)]", globals=globals(), number=10000))
# 26.47s naive class
print(timeit("[b(x,y) for x in range(100) for y in range(100)]", globals=globals(), number=10000))
# 18.00s optimized class
print(timeit("[c(x,y) for x in range(100) for y in range(100)]", globals=globals(), number=10000))
# 12.12s closure
Using closures seems to offer significant speed gains in cases where the number of calls is high. However, classes offer extensive customization and are the superior choice at times.
I'd rewrite the class example with something like:
class F(object):
    __slots__ = ('__call__',)
    def __init__(self, op):
        if op == 'mult':
            self.__call__ = lambda a, b: a * b
        elif op == 'add':
            self.__call__ = lambda a, b: a + b
        else:
            raise InvalidOp(op)
That gives 0.40 usec/pass (a plain function takes 0.31, so it is 29% slower) on my machine with Python 3.2.2. Without using object as a base class it gives 0.65 usec/pass (i.e. 55% slower than the object-based version). And for some reason, code that checks op in __call__ gives almost the same results as when the check is done in __init__: with object as a base and the check inside __call__, it gives 0.61 usec/pass.
One reason why you would use classes might be polymorphism.
class UserFunctions(object):
    __slots__ = ('__call__',)
    def __init__(self, name):
        f = getattr(self, '_func_' + name, None)
        if f is None:
            raise InvalidOp(name)
        else:
            self.__call__ = f

class MyOps(UserFunctions):
    @classmethod
    def _func_mult(cls, a, b): return a * b
    @classmethod
    def _func_add(cls, a, b): return a + b
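A quick usage sketch of the polymorphic version:

op = MyOps('mult')
print(op(3, 4))  # 12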
Assuming I have a decorator and a wrapped function like this:
def squared(method):
    def wrapper(x, y):
        return method(x*x, y*y)
    return wrapper

@squared
def sum(x, y):
    return x + y
I have other code that would like to call the undecorated version of the sum function. Is there an import trick that can get me to this unwrapped method? If my code says from some.module.path import sum then, I get the wrapped version of the sum method, which is not what I want in this case. (Yes, I know I could break this out into a helper method, but that breaks some of the cleanliness of the pattern I'm going for here.)
I'm okay with adding extra "magic" to the decorator to provide some alternate symbol name (like orig_sum) that I could then import, I just don't know how to do that.
def unwrap(fn):
    return fn.__wrapped__

def squared(method):
    def wrapper(x, y):
        return method(x*x, y*y)
    wrapper.__wrapped__ = method
    return wrapper

@squared
def sum(x, y):
    return x + y

sum(2, 3)          # -> 13
unwrap(sum)(2, 3)  # -> 5
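Note that the standard library already does this bookkeeping: functools.wraps (since Python 3.2) sets __wrapped__ on the wrapper automatically. A sketch of the same decorator relying on it (using the name total to avoid shadowing the built-in sum):

import functools

def squared(method):
    @functools.wraps(method)  # copies metadata and sets wrapper.__wrapped__ = method
    def wrapper(x, y):
        return method(x*x, y*y)
    return wrapper

@squared
def total(x, y):
    return x + y

print(total(2, 3))              # 13
print(total.__wrapped__(2, 3))  # 5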
What about this?
def squared(method):
    def wrapper(x, y):
        return method(x*x, y*y)
    return wrapper

def sum(x, y):
    return x + y

squared_sum = squared(sum)
It's still a decorator, but you can still import squared and sum without any magic. Not sure if that's what you meant by 'helper method', but I find this much cleaner than a sum function that actually sums the squares of its inputs.
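For completeness, a quick check of both versions:

print(sum(2, 3))          # 5, the plain function
print(squared_sum(2, 3))  # 13, i.e. 4 + 9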