How do I uncurry a function in Python?

Recently, I have been studying programming languages using Standard ML, where I learned about currying, so I tried applying it in Python.
Below is a simple function together with a currying helper.
def range_new(x, y):
    return [i for i in range(x, y+1)]

def curry_2(f):
    return lambda x: lambda y: f(x, y)

def uncurry_2(f):
    pass  # I don't know it...

print(range_new(1, 10))
curried_range = curry_2(range_new)
countup = curried_range(1)
print(countup(10))
print(curried_range(1)(10))
The result is below, and it works well; with curry_2 we can make a new function (countup). But now I want to make an uncurried function, and I don't know how.
How can I do it?
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

The easiest solution is to wrap the curried function again with code that uncurries it:
def uncurry_2(f):
    return lambda x, y: f(x)(y)

uncurried_range = uncurry_2(curried_range)
print(uncurried_range(1, 10))
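The same idea generalizes to any number of arguments; a sketch (not from the original post), assuming the curried function takes exactly one argument per call:

```python
from functools import reduce

def uncurry(f):
    # Feed each positional argument to the curried function in turn.
    return lambda *args: reduce(lambda g, x: g(x), args, f)

curried_add3 = lambda x: lambda y: lambda z: x + y + z
add3 = uncurry(curried_add3)
print(add3(1, 2, 3))  # -> 6
```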

It's not exactly good style, but you can access the variables in the closure using the (possibly CPython-only) __closure__ attribute of the returned lambda:
>>> countup.__closure__[0].cell_contents
<function __main__.range_new>
This accesses the content of the innermost closure (the variable used in the innermost lambda) of your function curry_2 and thus returns the function you used there.
However, in production code you shouldn't use that. It would be better to create a class (or function) for currying that supports accessing the uncurried function (something a plain lambda does not provide). Some functools utilities do support accessing the wrapped function, for example partial:
>>> from functools import partial
>>> countup = partial(range_new, 1)
>>> print(countup(10))
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> countup.func
<function __main__.range_new>

I believe that by uncurry you mean you'd like to allow the function to accept more arguments again. Have you considered using partial? It allows you to supply as many arguments as desired when calling the function.
from functools import partial

def f(a, b, c, d):
    print(a, b, c, d)

g = partial(partial(f, 1, 2), 3)
g(4)
Implementing it yourself should be pretty straightforward:
def partial(fn, *args):
    def new_func(*args2):
        new_args = args + args2
        return fn(*new_args)  # return fn's result so the partial is generally useful
    return new_func
Note that both the code presented in the original question and the code above is known as partial application. Currying is typically more flexible than this - here's how you can do it in Python 3 (it is trickier in Python 2).
from inspect import signature

def curry(fn, *args1):
    def new_fn(*args2):
        current_args = args1 + args2
        if len(signature(fn).parameters) > len(current_args):
            # Not all arguments supplied yet: return a fresh curried
            # function instead of mutating shared state, so partial
            # applications remain reusable.
            return curry(fn, *current_args)
        return fn(*current_args)
    return new_fn
j = curry(f)
j(1)(2, 3)(4)
Now back to your code. range_new can now be used in a few new ways:
print(range_new(1, 10))
curried_range = curry(range_new)
countup = curried_range(1)
print(countup(10))
countup_again = curried_range
print(countup_again(1, 10))
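For reference, here is a self-contained sketch of a stateless curry variant; because each call returns a fresh closure instead of mutating nonlocal state, partial applications like curried_range can be reused safely:

```python
from inspect import signature

def curry(fn, *args):
    # Each call builds a fresh closure, so partial applications
    # can be reused without sharing accumulated state.
    def new_fn(*more):
        all_args = args + more
        if len(all_args) < len(signature(fn).parameters):
            return curry(fn, *all_args)
        return fn(*all_args)
    return new_fn

def range_new(x, y):
    return list(range(x, y + 1))

curried_range = curry(range_new)
print(curried_range(1)(10))  # -> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(curried_range(1, 10))  # same result; reuse is safe
```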

Related

How to pass an unpacked iterator as an argument to a function?

I have a question about passing arguments in a function.
In Python, when you want to unpack an iterable object on the right-hand side, you have to do it within some context, like a tuple or a set display.
For example, you cannot say:
a, b = *(1, 2)
and you should say:
a, b = (*(1, 2),)  # a=1, b=2
or you can say:
a, b = {*(1, 2)}  # a=1, b=2 or a=2, b=1
Am I right?
But when you want to unpack an iterable and then pass it as arguments to a function, you do not need any context at all; you just unpack your iterable object.
For example:
def f(param1, param2):
    pass

f(*(1, 2))
and you do not need to use some kind of context like before. For example, you do not say:
f({*(1, 2)})  # that would be f({1, 2})
I think we don't use {} or any other context in this case because we want to pass 2 values as arguments to our f function. Thus, I assume we have to say f(*(1, 2)), not f({*(1, 2)}).
If I am right, could you please explain more about how f(*(1, 2)) works without any context under the hood?
f(*(1, 2)) does have context! The context is the list (and dictionary) of function arguments itself. In essence, in a function call f(...) the stuff between the parentheses can be seen as a hybrid between a tuple and a dictionary literal (only using x=4 instead of 'x': 4), which then gets passed to the function:
>>> def f(*args, **kwargs):
... print(args, kwargs)
>>> l = [10, 11]; d = {'a': 12, 'b': 13}
>>> f(1, *l, 2, 3, x=4, y=5, **d, z=6)
(1, 10, 11, 2, 3) {'x': 4, 'y': 5, 'a': 12, 'b': 13, 'z': 6}
Viewed like this it makes perfect sense that you can unpack sequences and dictionaries into this context.
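To make the contrast concrete, a small sketch (with a hypothetical function f) showing where a bare starred expression is and isn't allowed:

```python
def f(a, b):
    return (a, b)

pair = (1, 2)

# In a call, the argument list itself is the context:
print(f(*pair))    # -> (1, 2)

# On the right-hand side of an assignment, the star needs a display:
a, b = (*pair,)    # tuple display
print(a, b)        # -> 1 2

# a, b = *pair     # SyntaxError: can't use starred expression here
```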

Calling a function recursively depending on the number of args in Python

I have a function which takes two parameters and performs a binary operation:
def foo(arg1, arg2):
    return operation(arg1, arg2)
I need to generalize this function such that if three args are passed it returns operation(arg1, operation(arg2, arg3)), if four are provided operation(arg1, operation(arg2, operation(arg3, arg4))), and so on. Is it possible to do that in Python?
You can do this using the *args form of declaring a function: check whether the length of the arguments is 2 and if so return the value of the operation; otherwise return the value of the operation applied to the first argument and foo of the remaining arguments:
def operation(arg1, arg2):
    return arg1 + arg2

def foo(*args):
    if len(args) == 2:
        return operation(*args)
    return operation(args[0], foo(*args[1:]))

print(foo(1, 3))
print(foo(2, 3, 5))
print(foo(1, 2, 3, 4, 5, 6, 7))
Output:
4
10
28
Note you may also want to check whether 0 or 1 arguments are passed to prevent "index out of range" errors. For 1 argument you could just return the input value, e.g.
if len(args) == 1:
    return args[0]
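Putting those guards together, a sketch of a defensive version:

```python
def operation(arg1, arg2):
    return arg1 + arg2

def foo(*args):
    if not args:
        raise TypeError("foo() requires at least one argument")
    if len(args) == 1:
        return args[0]
    # Fold from the left: operation(args[0], foo(rest)).
    return operation(args[0], foo(*args[1:]))

print(foo(5))        # -> 5
print(foo(1, 3))     # -> 4
print(foo(2, 3, 5))  # -> 10
```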
As pointed out by @wallefan in the comments, there is a standard library function for this: functools.reduce. You can use it like this:
from functools import reduce
print(reduce(operation, (1, 3)))
print(reduce(operation, (2, 3, 5)))
print(reduce(operation, (1, 2, 3, 4, 5, 6, 7)))
The output is the same as the foo function above.
Yes, and in fact it's built into the standard library: https://docs.python.org/3/library/functools.html#functools.reduce
import functools

def operation(a, b):
    return a + b

# returns 15
functools.reduce(operation, (1, 2, 3, 4, 5))
If you'd like, you can combine this with the varargs approach mentioned in Nick's answer:
import functools

def operation(a, b):
    return a + b

def foo(*args):
    return functools.reduce(operation, args)

# returns 15
foo(1, 2, 3, 4, 5)

Understanding closure scope in Python

This is an example from Brett Slatkin's book:
def sort_priority(values, group):
    def helper(x):
        if x in group:
            return (0, x)
        return (1, x)
    values.sort(key=helper)
Furthermore, they gave these values:
numbers = [8, 3, 1, 2, 5, 4, 7, 6]
group = {2, 3, 5, 7}
sort_priority(numbers, group)
print(numbers)
And we have
[2, 3, 5, 7, 1, 4, 6, 8]
I do not understand this example. Why do we have return twice, and what does the helper function actually do?
You can read the function as:
def helper(x):
    if x in group:
        return (0, x)
    else:
        return (1, x)
Or, more concisely:
def helper(x):
    return (x not in group, x)
The intuition behind this is that sort accepts a key callback which is called on each element. For each element, helper is invoked and returns a tuple (either (0, x) or (1, x), depending on whether x exists in the "VIP" group).
You should understand that tuples are compared element by element, meaning both items in a tuple are considered when deciding the order of elements. This implies that elements for which helper returns (0, x) will be ordered before those returning (1, x), because 0 < 1.
After this, we have two groups: those with first element 0 and those with first element 1. All 0-group elements come first, but the order within each group depends on the second item in the tuples, x.
For your input:
Group0: [2, 3, 5, 7]
Group1: [8, 1, 4, 6]
Ordering within Group0: [2, 3, 5, 7]
Ordering within Group1: [1, 4, 6, 8]
Overall ordering: [Group0, Group1]
Result: [2, 3, 5, 7, 1, 4, 6, 8]
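The tuple-comparison rule behind this can be checked directly:

```python
# Tuples compare element by element: the first item decides,
# and the second item only breaks ties.
assert (0, 7) < (1, 1)   # 0 < 1, so the second items are irrelevant
assert (0, 2) < (0, 3)   # first items tie, 2 < 3 decides

group = {2, 3, 5, 7}
numbers = [8, 3, 1, 2, 5, 4, 7, 6]
print(sorted(numbers, key=lambda x: (x not in group, x)))
# -> [2, 3, 5, 7, 1, 4, 6, 8]
```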
Why do we have return two times?
This has nothing to do with closures or nested functions.
def helper(x):
    if x in group:
        return (0, x)
    return (1, x)
Can be written as
def helper(x):
    if x in group:
        return (0, x)
    else:
        return (1, x)
Either way, the return value depends on how the if condition evaluates.
If it is True, then (0, x) is returned. If it is False, then (1, x) is returned.
Note that the first return statement is inside the if block. In Python, whenever a function reaches a return statement, execution is handed back to the caller.
In your example, the two returns are just a shortcut to avoid an if/else statement. When a particular value is in the group, (0, x) is returned; if the if condition is not satisfied, then (1, x) is returned.
It's a bit easier to understand the code when it's written without nested functions:
def helper(x):
    global group
    if x in group:
        return 0, x
    return 1, x

def sort_priority(values):
    values.sort(key=helper)

numbers = [8, 3, 1, 2, 5, 4, 7, 6]
group = {2, 3, 5, 7}
sort_priority(numbers)
print(numbers)
Now it's easy to see that sort_priority() simply sorts the values by calling the helper function which creates an order by assigning a value to each x.
When the helper function is called with a value that's in group, it gets "lower" priority (zero), while if the value is not in group, it gets higher priority (one).
A closer look at helper indeed shows:
def helper(x):
    global group
    if x in group:
        return 0, x  # <-- we're in the `if`, so `x` gets zero
    return 1, x      # <-- if we got here, we didn't get into the `if`, so `x` gets one
So by using helper as the key function in the sort, we get an ordered list which puts the items that are in group first, and only then the items that are not in group:
[2, 3, 5, 7, 1, 4, 6, 8]
             ^
             The first item that is not in group
It is clearer, in my opinion, to use the sorted(values) function instead of values.sort(); otherwise it is a little ambiguous what is returned and how helper is actually used.
def sort_priority(values, group):
    def helper(x):
        if x in group:
            return (0, x)
        return (1, x)
    sorted_values = sorted(values, key=helper)
    return sorted_values
numbers = [8, 3, 1, 2, 5, 4, 7, 6]
group = {2, 3, 5, 7}
print('Sorted Numbers List: ', sort_priority(numbers, group))
Of course, since sorted() is used, the sorted list is returned explicitly.

How to multiply functions in python?

def sub3(n):
    return n - 3

def square(n):
    return n * n
It's easy to compose functions in Python:
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> [square(sub3(n)) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
Unfortunately, using the composition as a key is awkward: you have to wrap the two functions in another function which calls both in turn:
>>> sorted(my_list, key=lambda n: square(sub3(n)))
[3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
This should really just be sorted(my_list, key=square*sub3), because heck, function __mul__ isn't used for anything else anyway:
>>> square * sub3
TypeError: unsupported operand type(s) for *: 'function' and 'function'
Well let's just define it then!
>>> type(sub3).__mul__ = 'something'
TypeError: can't set attributes of built-in/extension type 'function'
D'oh!
>>> class ComposableFunction(types.FunctionType):
... pass
...
TypeError: Error when calling the metaclass bases
type 'function' is not an acceptable base type
D'oh!
class Hack(object):
    def __init__(self, function):
        self.function = function
    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)
    def __mul__(self, other):
        def hack(*args, **kwargs):
            return self.function(other(*args, **kwargs))
        return Hack(hack)
Hey, now we're getting somewhere..
>>> square = Hack(square)
>>> sub3 = Hack(sub3)
>>> [square(sub3(n)) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
>>> [(square*sub3)(n) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
>>> sorted(my_list, key=square*sub3)
[3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
But I don't want a Hack callable class! The scoping rules are different in ways I don't fully understand, and it's arguably even uglier than just using the "lameda". Is it possible to get composition working directly with functions somehow?
You can use your hack class as a decorator pretty much as it's written, though you'd likely want to choose a more appropriate name for the class.
Like this:
class Composable(object):
    def __init__(self, function):
        self.function = function
    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)
    def __mul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return self.function(other(*args, **kwargs))
        return composed
    def __rmul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return other(self.function(*args, **kwargs))
        return composed
You can then decorate your functions like so:
@Composable
def sub3(n):
    return n - 3

@Composable
def square(n):
    return n * n
And compose them like so:
(square * sub3)(n)
Basically it's the same thing you've accomplished using your hack class, but using it as a decorator.
Python does not (and likely will never) have support for function composition either at the syntactic level or as a standard library function. There are various 3rd party modules (such as functional) that provide a higher-order function that implements function composition.
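A minimal higher-order compose function along those lines (a sketch, not tied to any particular third-party library):

```python
from functools import reduce

def compose(*functions):
    # compose(f, g, h)(x) == f(g(h(x)))
    return reduce(lambda f, g: lambda x: f(g(x)), functions, lambda x: x)

def sub3(n):
    return n - 3

def square(n):
    return n * n

print(compose(square, sub3)(5))  # square(sub3(5)) -> 4
print(sorted(range(10), key=compose(square, sub3)))
# -> [3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
```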
Maybe something like this:
class Composition(object):
    def __init__(self, *args):
        self.functions = args
    def __call__(self, arg):
        result = arg
        for f in reversed(self.functions):
            result = f(result)
        return result
And then:
sorted(my_list, key=Composition(square, sub3))
You can compose functions using the SSPipe library:
from sspipe import p, px
sub3 = px - 3
square = px * px
composed = sub3 | square
print(5 | composed)

Python yield a list with generator

I was getting confused by the purpose of "return" and "yield"
def countMoreThanOne():
    return (yy for yy in xrange(1,10,2))

def countMoreThanOne():
    yield (yy for yy in xrange(1,10,2))
What is the difference on the above function?
Is it impossible to access the content inside the function using yield?
In the first you return a generator:
from itertools import chain

def countMoreThanOne():
    return (yy for yy in xrange(1,10,2))

print list(countMoreThanOne())
>>>
[1, 3, 5, 7, 9]
while in the second you are yielding a generator, so you get a generator within a generator:
def countMoreThanOne():
    yield (yy for yy in xrange(1,10,2))
print list(countMoreThanOne())
print list(chain.from_iterable(countMoreThanOne()))
[<generator object <genexpr> at 0x7f0fd85c8f00>]
[1, 3, 5, 7, 9]
If you use a list comprehension instead, the difference can be seen clearly.
In the first:
def countMoreThanOne():
    return [yy for yy in xrange(1,10,2)]

print countMoreThanOne()
>>>
[1, 3, 5, 7, 9]
In the second:
def countMoreThanOne1():
    yield [yy for yy in xrange(1,10,2)]

print countMoreThanOne1()
<generator object countMoreThanOne1 at 0x7fca33f70eb0>
>>>
After reading your other comments I think you should write the function like this:
def countMoreThanOne():
    return xrange(1, 10, 2)
>>> print countMoreThanOne()
xrange(1, 11, 2)
>>> print list(countMoreThanOne())
[1, 3, 5, 7, 9]
or even better, to have some point in making it a function:
def oddNumbersLessThan(stop):
    return xrange(1, stop, 2)
>>> print list(oddNumbersLessThan(15))
[1, 3, 5, 7, 9, 11, 13]
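For Python 3 readers (where xrange is gone and print is a function), the same distinction can be sketched like this:

```python
def return_gen():
    # Returns a generator object built from a generator expression.
    return (y for y in range(1, 10, 2))

def yield_gen():
    # Yields the generator object itself as a single item.
    yield (y for y in range(1, 10, 2))

def yield_items():
    # Yields each value individually, which is usually what is wanted.
    for y in range(1, 10, 2):
        yield y

print(list(return_gen()))   # -> [1, 3, 5, 7, 9]
print(list(yield_items()))  # -> [1, 3, 5, 7, 9]
inner = next(yield_gen())   # the single yielded item is itself a generator
print(list(inner))          # -> [1, 3, 5, 7, 9]
```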
