How to multiply functions in Python?

def sub3(n):
    return n - 3

def square(n):
    return n * n
It's easy to compose functions in Python:
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> [square(sub3(n)) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
Unfortunately, using the composition as a key is awkward; you have to wrap them in another function which calls both functions in turn:
>>> sorted(my_list, key=lambda n: square(sub3(n)))
[3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
This should really just be sorted(my_list, key=square*sub3), because heck, function __mul__ isn't used for anything else anyway:
>>> square * sub3
TypeError: unsupported operand type(s) for *: 'function' and 'function'
Well let's just define it then!
>>> type(sub3).__mul__ = 'something'
TypeError: can't set attributes of built-in/extension type 'function'
D'oh!
>>> class ComposableFunction(types.FunctionType):
...     pass
...
TypeError: Error when calling the metaclass bases
    type 'function' is not an acceptable base type
D'oh!
class Hack(object):
    def __init__(self, function):
        self.function = function
    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)
    def __mul__(self, other):
        def hack(*args, **kwargs):
            return self.function(other(*args, **kwargs))
        return Hack(hack)
Hey, now we're getting somewhere..
>>> square = Hack(square)
>>> sub3 = Hack(sub3)
>>> [square(sub3(n)) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
>>> [(square*sub3)(n) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
>>> sorted(my_list, key=square*sub3)
[3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
But I don't want a Hack callable class! The scoping rules are different in ways I don't fully understand, and it's arguably even uglier than just using the lambda. Is it possible to get composition working directly with functions somehow?

You can use your hack class as a decorator pretty much as it's written, though you'd likely want to choose a more appropriate name for the class.
Like this:
class Composable(object):
    def __init__(self, function):
        self.function = function
    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)
    def __mul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return self.function(other(*args, **kwargs))
        return composed
    def __rmul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return other(self.function(*args, **kwargs))
        return composed
You can then decorate your functions like so:
@Composable
def sub3(n):
    return n - 3

@Composable
def square(n):
    return n * n
And compose them like so:
(square * sub3)(n)
Basically it's the same thing you've accomplished using your hack class, but using it as a decorator.
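Because __mul__ (and __rmul__) wrap the composed function in another Composable, the result is itself callable and can be used directly as a sort key, reproducing the example from the question:

my_list = list(range(10))
print(sorted(my_list, key=square * sub3))
# [3, 2, 4, 1, 5, 0, 6, 7, 8, 9]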

Python does not (and likely will never) have support for function composition either at the syntactic level or as a standard library function. There are various 3rd party modules (such as functional) that provide a higher-order function that implements function composition.
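If you only need the behaviour, a minimal version of such a higher-order helper fits in a few lines (a sketch; the name compose is just illustrative, not taken from any particular library):

from functools import reduce

def compose(*funcs):
    """compose(f, g)(x) == f(g(x)); the rightmost function is applied first."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

# Behaves like the lambda key from the question:
# sorted(my_list, key=compose(square, sub3))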

Maybe something like this:
class Composition(object):
    def __init__(self, *args):
        self.functions = args
    def __call__(self, arg):
        result = arg
        for f in reversed(self.functions):
            result = f(result)
        return result
And then:
sorted(my_list, key=Composition(square, sub3))

You can compose functions using the SSPipe library:
from sspipe import p, px
sub3 = px - 3
square = px * px
composed = sub3 | square
print(5 | composed)

Related

Calling a function recursively depending on the number of args in Python

I have a function which takes two parameters and performs a binary operation:
def foo(arg1, arg2):
    return operation(arg1, arg2)
I need to generalize this function such that if three args are passed it returns operation(arg1,operation(arg2,arg3)), if four are provided operation(arg1,operation(arg2,operation(arg3,arg4))) and so on. Is it possible to do that in python?
You can do this using the *args form of declaring a function; check if the length of the arguments is 2 and if so return the value of the operation, otherwise return the value of the operation of the first argument with foo of the remaining arguments:
def operation(arg1, arg2):
    return arg1 + arg2

def foo(*args):
    if len(args) == 2:
        return operation(*args)
    return operation(args[0], foo(*args[1:]))
print(foo(1, 3))
print(foo(2, 3, 5))
print(foo(1, 2, 3, 4, 5, 6, 7))
Output:
4
10
28
Note you may also want to check if 0 or 1 arguments are passed to prevent "index out of range" errors. For 1 argument you could just return the input value e.g.
if len(args) == 1:
    return args[0]
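Putting those guards together, a defensive version might look like this (a sketch; returning 0 for zero arguments assumes an addition-like operation with identity 0 and isn't part of the original answer):

def foo(*args):
    if len(args) == 0:
        return 0          # assumes the operation's identity element is 0
    if len(args) == 1:
        return args[0]
    return operation(args[0], foo(*args[1:]))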
As pointed out by @wallefan in the comments, there is a standard library function for this: functools.reduce. You can use that like this:
from functools import reduce
print(reduce(operation, (1, 3)))
print(reduce(operation, (2, 3, 5)))
print(reduce(operation, (1, 2, 3, 4, 5, 6, 7)))
The output is the same as the foo function above.
Yes, and in fact it's built into the standard library: https://docs.python.org/3/library/functools.html#functools.reduce
import functools
def operation(a, b):
    return a + b
# returns 15
functools.reduce(operation, (1, 2, 3, 4, 5))
If you'd like, you can combine this with varargs mentioned in Nick's answer:
import functools
def operation(a, b):
    return a + b

def foo(*args):
    return functools.reduce(operation, args)
# returns 15
foo(1,2,3,4,5)

Python list operations with elements

I was trying to operate with a list and a loop. The thing is that I have a list like the following, a = [9, 3, 5, 2], and I want to subtract 1 from each element... So I have tried something like this:
a = [9, 3, 5, 2]
b = -1
x = a - b
Somewhat beyond the scope of your actual question, but you could use some magic methods to abstract away the details:
class MyCoolList(list):
    def __sub__(self, other):
        return [item - other for item in self]
    def __add__(self, other):
        return [item + other for item in self]
    def __mul__(self, other):
        return [item * other for item in self]
Now we can do:
cls = MyCoolList([9, 3, 5, 2])
print(cls - 1)
print(cls + 1)
print(cls * 2)
Which yields
[8, 2, 4, 1]
[10, 4, 6, 3]
[18, 6, 10, 4]
To not repeat yourself (DRY), you may very well use the operator module:
import operator as op
class MyCoolList(list):
    def calc(self, what, other):
        return [what(item, other) for item in self]
    def __sub__(self, other):
        return self.calc(op.sub, other)
    def __add__(self, other):
        return self.calc(op.add, other)
    def __mul__(self, other):
        return self.calc(op.mul, other)
In the end, you could use a decorator altogether:
import operator as op
def calc(operator_function):
    def real_decorator(function):
        def wrapper(*args, **kwargs):
            lst, other = args
            return [operator_function(item, other) for item in lst]
        return wrapper
    return real_decorator

class MyCoolList(list):
    @calc(op.sub)
    def __sub__(self, other):
        pass
    @calc(op.add)
    def __add__(self, other):
        pass
    @calc(op.mul)
    def __mul__(self, other):
        pass
cls = MyCoolList([9, 3, 5, 2])
print(cls - 1)
print(cls + 1)
Use a list comprehension:
a = [9, 3, 5, 2]
b = [x-1 for x in a]
output:
[8, 2, 4, 1]
A simple one-liner using lambda and map:
a = [9, 3, 5, 2]
x = list(map(lambda i: i-1, a))
print(x)
If you are new to Python, here is the simplest one:
a = [9, 3, 5, 2]
b = []
for i in a:
    b.append(i - 1)
print(b)
OUTPUT
[8, 2, 4, 1]

How do I uncurry a function in Python?

Recently I have been studying programming languages using Standard ML, and I've learned about currying, so I applied it in Python.
Below is a simple function and its curried version.
def range_new(x, y):
    return [i for i in range(x, y+1)]

def curry_2(f):
    return lambda x: lambda y: f(x, y)

def uncurry_2(f):
    pass  # I don't know it...
print(range_new(1, 10))
curried_range = curry_2(range_new)
countup = curried_range(1)
print(countup(10))
print(curried_range(1)(10))
The result is below, and it works well; with curry_2 we can make a new function (countup). But now I want to make an uncurried function.
However, I don't know how I can make it.
How can I do it?
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
The easiest solution is to wrap the curried function again with code that uncurries it:
def uncurry_2(f):
    return lambda x, y: f(x)(y)
uncurried_range = uncurry_2(curried_range)
print(uncurried_range(1, 10))
It's not exactly good style but you can access the variables in the closure using the (maybe CPython-only) __closure__ attribute of the returned lambda:
>>> countup.__closure__[0].cell_contents
<function __main__.range_new>
This accesses the content of the innermost closure (the variable used in the innermost lambda) of your function curry_2 and thus returns the function you used there.
However, in production code you shouldn't use that. It would be better to create a class (or function) for currying that supports accessing the uncurried function (which is something lambda does not provide). Some functions in functools do support accessing the wrapped function, for example partial:
>>> from functools import partial
>>> countup = partial(range_new, 1)
>>> print(countup(10))
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> countup.func
<function __main__.range_new>
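A minimal sketch of what such a currying wrapper could look like (the Curried2 name and func attribute here are illustrative, not an existing API):

class Curried2(object):
    """Curry a two-argument function while keeping the original reachable."""
    def __init__(self, func):
        self.func = func                  # the uncurried function stays accessible
    def __call__(self, x):
        return lambda y: self.func(x, y)  # supply arguments one at a time

curried_range = Curried2(range_new)
print(curried_range(1)(10))       # [1, 2, ..., 10]
print(curried_range.func(1, 10))  # the uncurried original, no introspection needed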
I believe by uncurry you mean you'd like to allow the function to accept more arguments. Have you considered using the "partial" function? It allows you to use as many arguments as desired when calling the method.
from functools import partial
def f(a, b, c, d):
    print(a, b, c, d)
g = partial(partial(f, 1, 2), 3)
g(4)
Implementing it should be pretty straightforward:
def partial(fn, *args):
    def new_func(*args2):
        newArgs = args + args2
        return fn(*newArgs)
    return new_func
Note that both the code presented in the original question and the code above are examples of partial application. Currying is typically more flexible than this; here's how you can do it with Python 3 (it is trickier in Python 2).
from inspect import signature

def curry(fn, *args1):
    current_args = args1
    sig = signature(fn)
    def new_fn(*args2):
        nonlocal current_args
        current_args += args2
        if len(sig.parameters) > len(current_args):
            return new_fn
        else:
            return fn(*current_args)
    return new_fn
j = curry(f)
j(1)(2, 3)(4)
Now back to your code. range_new can now be used in a few new ways:
print(range_new(1, 10))
curried_range = curry(range_new)
countup = curried_range(1)
print(countup(10))
countup_again = curried_range
print(countup_again(1, 10))

Python yield a list with generator

I was getting confused by the purpose of "return" and "yield"
def countMoreThanOne():
    return (yy for yy in xrange(1,10,2))

def countMoreThanOne():
    yield (yy for yy in xrange(1,10,2))
What is the difference on the above function?
Is it impossible to access the content inside the function using yield?
In the first you return a generator:
from itertools import chain
def countMoreThanOne():
    return (yy for yy in xrange(1,10,2))
print list(countMoreThanOne())
>>>
[1, 3, 5, 7, 9]
while in the second you are yielding a generator, so you get a generator within a generator:
def countMoreThanOne():
    yield (yy for yy in xrange(1,10,2))
print list(countMoreThanOne())
print list(chain.from_iterable(countMoreThanOne()))
[<generator object <genexpr> at 0x7f0fd85c8f00>]
[1, 3, 5, 7, 9]
If you use a list comprehension instead, the difference can be seen clearly. In the first:
def countMoreThanOne():
    return [yy for yy in xrange(1,10,2)]
print countMoreThanOne()
>>>
[1, 3, 5, 7, 9]
And in the second:
def countMoreThanOne1():
    yield [yy for yy in xrange(1,10,2)]
print countMoreThanOne1()
<generator object countMoreThanOne1 at 0x7fca33f70eb0>
>>>
After reading your other comments I think you should write the function like this:
def countMoreThanOne():
    return xrange(1, 10, 2)
>>> print countMoreThanOne()
xrange(1, 11, 2)
>>> print list(countMoreThanOne())
[1, 3, 5, 7, 9]
or even better, to have some point in making it a function:
def oddNumbersLessThan(stop):
    return xrange(1, stop, 2)
>>> print list(oddNumbersLessThan(15))
[1, 3, 5, 7, 9, 11, 13]

Setting a subsection or slice of a global numpy array through a python object

I am trying to reference a slice of a "global" numpy array via an object attribute. Here is what I think the class structure would be like, and its use case.
import numpy
class X:
    def __init__(self, parent):
        self.parent = parent
        self.pid = [0, 1, 2]
    def __getattr__(self, name):
        if name == 'values':
            return self.parent.P[self.pid]
        else:
            raise AttributeError

class Node:
    def __init__(self):
        self.P = numpy.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
        self._values = X(self)
    def __getattr__(self, name):
        if name == 'x':
            return self._values.values
        else:
            raise AttributeError
Here is the use case:
>>> n = Node()
>>> print n.P
[ 1 2 3 4 5 6 7 8 9 10]
>>> print n.x
[1 2 3]
>>> print n.x[1:3]
[2 3]
Which works fine, now I would like to assign values to n.P through the n.x attribute by,
>>> n.x = numpy.array([11, 12, 13])
to get,
>>> print n.P
[ 11 12 13 4 5 6 7 8 9 10]
Or assign values to slices by,
>>> n.x[1:3] = numpy.array([77, 88])
to get,
>>> print n.P
[ 11 77 88 4 5 6 7 8 9 10]
But for the life of me, I'm struggling to get this assignment working. I thought it would be easy using __setattr__ and __setitem__, but a whole day later I still haven't managed it.
Ultimately, n.x will be returned as a multi-dimensional array which the X class will reshape on return, but it is stored in n.P, which is a vector. I have removed this to simplify the problem.
I would love some help on this. Has anyone done this before? Or suggest how to do this?
Thanks in advance for your help.
SOLUTION
So after many days of stumbling around I found a solution. I suspect this can be simplified and refined. The solution is to create an X object in your Node object. When it's retrieved, it returns a temporary numpy object (Values) with knowledge of its parent node and pids. The __setslice__ method defined on it updates the global P array with the new values. If the X object is assigned to directly, it doesn't return a Values object but sets the global P values directly.
Two points, which may be invalid: 1. the Node and X objects had to be subclasses of object; 2. if setting a higher-dimensional array, you need to use __setitem__ instead, which won't work on 1D arrays or lists.
As I said, I suspect this code can be improved, since I'm not sure I fully understand it. I am happy to take improvements and suggestions.
Thanks for your help, especially Bago.
Here is my final code.
import numpy
class Values(numpy.ndarray):
    def __new__(cls, input_array, node, pids):
        obj = numpy.asarray(input_array).view(cls)
        obj.node = node
        obj.pids = pids
        return obj
    def __setslice__(self, i, j, values):
        self.node._set_values(self.pids[i:j], values)

class X(object):
    def __get__(self, instance, owner):
        p = instance.P[instance.pids]
        return Values(p, instance, instance.pids)
    def __set__(self, instance, values):
        instance.P[instance.pids] = values

class Node(object):
    x = X()
    def __init__(self, pids=[0, 1, 2]):
        self.P = numpy.arange(11)
        self.pids = pids
    def _set_values(self, pids, values):
        self.P[pids] = values
node = Node(pids=[4, 5, 6, 7])
print '\nInitial State:'
print 'P =', node.P
print 'x =', node.x
print 'x[1:3] =', node.x[1:3]
print '\nSetting node.x = [44, 55, 66, 77]:'
node.x = [44, 55, 66, 77]
print 'P =', node.P
print 'x =', node.x
print 'x[1:3] =', node.x[1:3]
print '\nSetting node.x[1:3] = [100, 200]:'
node.x[1:3] = [100, 200]
print 'P =', node.P
print 'x =', node.x
print 'x[1:3] =', node.x[1:3]
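One caveat: __setslice__ is only called on Python 2; on Python 3, slice assignment goes through __setitem__. A rough sketch of the equivalent hook under that assumption (untested, and it only forwards the assignment to the parent rather than updating the temporary copy itself):

class Values(numpy.ndarray):
    def __new__(cls, input_array, node, pids):
        obj = numpy.asarray(input_array).view(cls)
        obj.node = node
        obj.pids = pids
        return obj

    def __setitem__(self, index, values):
        # Translate the index on the temporary view into parent-array pids
        # and forward the assignment to the parent Node.
        pids = numpy.asarray(self.pids)[index]
        self.node._set_values(pids, values)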
It's not clear to me what's not working, but I think maybe you're trying to do something like this:
import numpy
class X(object):
    def __init__(self, parent):
        self.parent = parent
        self.pid = [0, 1, 2]

    @property
    def values(self):
        tmp = self.parent.P[self.pid]
        return tmp

    @values.setter
    def values(self, input):
        self.parent.P[self.pid] = input

class Node(object):
    def __init__(self):
        self.P = numpy.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
        self._values = X(self)

    @property
    def x(self):
        return self._values.values

    @x.setter
    def x(self, input):
        self._values.values = input
I hope that gets you started.
update
The reason that n.x[1:3] = [77, 88] doesn't work using this approach is that both n.x and n.x[:] = ~ call the getter of X.values, which returns tmp. But tmp is a copy of part of P, and after n.x[:] = ~ that copy is thrown away and P is not updated. tmp is a copy because when you index an array with another array you get a copy, not a view. Here is an example to make that clearer; you can read more about numpy slicing/indexing here.
>>> P = np.arange(10)
>>> pid = np.array([1, 2, 3])
>>> Q = P[pid]
>>> Q[:] = 99
>>> P
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> R = P[1:4]
>>> R[:] = 99
>>> P
array([ 0, 99, 99, 99, 4, 5, 6, 7, 8, 9])
>>> P[[1,2]][:] = 88
>>> P
array([ 0, 99, 99, 99, 4, 5, 6, 7, 8, 9])
__setitem__ won't help, because you're calling the __setitem__ method of tmp, not of X.
The easiest way to make it work is to replace the pid array with a slice, but I know that's kind of limiting. You could also keep track of the tmp array, have a self._tmp so you can move the values from _tmp to P later. I know neither of those are perfect, but maybe someone else here will come up with a better approach. Sorry I couldn't do more.
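For completeness, here is roughly what the slice-based variant could look like (a sketch that assumes the selected elements are contiguous, which is exactly the limitation mentioned above):

import numpy

class Node(object):
    def __init__(self):
        self.P = numpy.arange(1, 11)
        self._pid = slice(0, 3)      # a slice instead of an index list

    @property
    def x(self):
        # Basic slicing returns a view, so in-place edits propagate to P.
        return self.P[self._pid]

    @x.setter
    def x(self, values):
        self.P[self._pid] = values

n = Node()
n.x = [11, 12, 13]       # whole-attribute assignment goes through the setter
n.x[1:3] = [77, 88]      # slice assignment works because n.x is a view of P
print(n.P)               # [11 77 88  4  5  6  7  8  9 10]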
