I have two functions that return a list of functions. The functions take in a number x and add i to it. i is an integer increasing from 0-9.
def test_without_closure():
    return [lambda x: x+i for i in range(10)]

def test_with_yield():
    for i in range(10):
        yield lambda x: x+i
I would expect test_without_closure to return a list of 10 functions that each add 9 to x since i's value is 9.
print sum(t(1) for t in test_without_closure()) # prints 100
I expected that test_with_yield would also have the same behavior, but it correctly creates the 10 functions.
print sum(t(1) for t in test_with_yield()) # print 55
My question is, does yielding form a closure in Python?
Yielding does not create a closure in Python, lambdas create a closure. The reason that you get all 9s in "test_without_closure" isn't that there's no closure. If there weren't, you wouldn't be able to access i at all. The problem is that all closures contain a reference¹ to the same i variable, which will be 9 at the end of the function.
This situation isn't much different in test_with_yield. Why, then, do you get different results? Because yield suspends execution of the function, so it's possible to use the yielded lambdas before the end of the function is reached, i.e. before i is 9. To see what this means, consider the following two examples of using test_with_yield:
[f(0) for f in test_with_yield()]
# Result: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[f(0) for f in list(test_with_yield())]
# Result: [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
What's happening here is that the first example yields a lambda (while i is 0), calls it (i is still 0), then advances the function until another lambda is yielded (i is now 1), calls the lambda, and so on. The important thing is that each lambda is called before the control flow returns to test_with_yield (i.e. before the value of i changes).
In the second example, we first create a list. So the first lambda is yielded (i is 0) and put into the list, the second lambda is created (i is now 1) and put into the list ... until the last lambda is yielded (i is now 9) and put into the list. And then we start calling the lambdas. So since i is now 9, all lambdas return 9.
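To make that interleaving concrete, here is a rough sketch (reusing test_with_yield from the question) that steps the generator by hand:
gen = test_with_yield()
f = next(gen)    # the generator runs until the first yield; i is 0
print(f(0))      # 0 - the lambda is called before the generator resumes
f = next(gen)    # the generator resumes, i becomes 1, and yields again
print(f(0))      # 1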
¹ The important bit here is that closures hold references to variables, not copies of the values they held when the closure was created. So if you rebind the variable inside an inner function (inner functions create closures the same way lambdas do; in Python 3 this requires a nonlocal declaration), that change is also visible outside, and if you rebind the variable outside, the change is visible inside the lambda.
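A small sketch of that reference behaviour (using nonlocal, which Python 3 requires for rebinding an enclosing variable):
def counter():
    i = 0
    def bump():
        nonlocal i   # rebind the enclosing i, not a local copy
        i += 1
    def read():
        return i
    return bump, read

bump, read = counter()
bump()
print(read())   # 1 - both inner functions share the same i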
No, yielding has nothing to do with closures.
Here is how to recognize closures in Python: a closure is
a function
in which an unqualified name lookup is performed
no binding of the name exists in the function itself
but a binding of the name exists in the local scope of a function whose definition surrounds the definition of the function in which the name is looked up.
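A minimal example that ticks all four boxes:
def outer():
    n = 42             # binding of n in the enclosing function's local scope
    def inner():       # a function ...
        return n       # ... doing an unqualified lookup of n, with no binding of n inside inner itself
    return inner

print(outer()())       # 42 - inner is a closure over n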
The reason for the difference in behaviour you observe is laziness, rather than anything to do with closures. Compare and contrast the following
def lazy():
    return ( lambda x: x+i for i in range(10) )

def immediate():
    return [ lambda x: x+i for i in range(10) ]

def also_lazy():
    for i in range(10):
        yield lambda x: x+i

not_lazy_any_more = list(also_lazy())
print( [ f(10) for f in lazy() ] ) # 10 -> 19
print( [ f(10) for f in immediate() ] ) # all 19
print( [ f(10) for f in also_lazy() ] ) # 10 -> 19
print( [ f(10) for f in not_lazy_any_more ] ) # all 19
Notice that the first and third examples give identical results, as do the second and the fourth. The first and third are lazy, the second and fourth are not.
Note that all four examples provide a bunch of closures over the most recent binding of i; it's just that in the first and third cases you evaluate the closures before rebinding i (even before you've created the next closure in the sequence), while in the second and fourth cases you first wait until i has been rebound to 9 (after you've created and collected all the closures you are going to make), and only then evaluate the closures.
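You can watch the rebinding happen with a small sketch that advances also_lazy by hand:
gen = also_lazy()
f0 = next(gen)    # the generator is paused with i bound to 0
print(f0(10))     # 10
next(gen)         # resuming the generator rebinds i to 1 ...
print(f0(10))     # 11 - ... and the closure f0 sees the new binding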
Adding to @sepp2k's answer: you're seeing these two different behaviours because the lambda functions being created don't know in advance where they will get i's value from. At the time each lambda is created, all it knows is that it will have to fetch i's value from the local scope, the enclosing scope, the global scope, or the builtins.
In this particular case i is a closure variable (enclosing scope), and its value changes with each iteration.
Check out LEGB in Python.
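For reference, a tiny sketch of the LEGB lookup order (Local, Enclosing, Global, Builtins):
x = "global"

def enclosing():
    x = "enclosing"
    def inner():
        return x   # no local x, so the enclosing binding wins over the global one
    return inner

print(enclosing()())   # enclosing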
Now, why does the second one work as expected but not the first?
It's because each time the generator yields a lambda, its execution is suspended at that point, and if you invoke the lambda right away it uses the value of i at that moment. In the first case, though, we have already advanced i to 9 before invoking any of the functions.
To prove it, you can fetch the current value of i from the lambda's __closure__ cell contents:
>>> for func in test_with_yield():
...     print "Current value of i is {}".format(func.__closure__[0].cell_contents)
...     print func(9)
...
Current value of i is 0
Current value of i is 1
Current value of i is 2
Current value of i is 3
Current value of i is 4
Current value of i is 5
Current value of i is 6
...
But if you instead store the functions somewhere and call them later, you will see the same behaviour as in the first case:
from itertools import islice

funcs = []
for func in islice(test_with_yield(), 4):
    print "Current value of i is {}".format(func.__closure__[0].cell_contents)
    funcs.append(func)

print '-' * 20

for func in funcs:
    print "Now value of i is {}".format(func.__closure__[0].cell_contents)
Output:
Current value of i is 0
Current value of i is 1
Current value of i is 2
Current value of i is 3
--------------------
Now value of i is 3
Now value of i is 3
Now value of i is 3
Now value of i is 3
Example used by Patrick Haugh in comments also shows the same thing: sum(t(1) for t in list(test_with_yield()))
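Spelled out (a sketch reusing the question's function):
print(sum(t(1) for t in test_with_yield()))         # 55  - each lambda is called while its i is current
print(sum(t(1) for t in list(test_with_yield())))   # 100 - list() exhausts the generator first, so i is 9 for all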
Correct way:
Assign i as a default value for a lambda parameter. Default values are evaluated when a function is created, and they won't change afterwards (unless they are mutable objects). i is now a local variable of each lambda function.
>>> def test_without_closure():
...     return [lambda x, i=i: x+i for i in range(10)]
...
>>> sum(t(1) for t in test_without_closure())
55
I am trying to generate a list of lambdas that I will later apply to an object, but when I try to do it via a comprehension or a loop over a list, the reference to the variable is kept, rather than the value itself. Let me illustrate.
Assume your object class is something like this:
class Object:
    def function(self, x):
        print(x)
So when you create the object and invoke it you get something like this:
o = Object()
o.function(0)
>>> 0
Now, if I manually construct my list of lambdas it would look like this:
lambdas = [
    lambda x: x.function(0),
    lambda x: x.function(1),
    lambda x: x.function(2)
]
Which I can then apply to my previously created object:
for l in lambdas:
    l(o)
>>> 0
>>> 1
>>> 2
However, when I generate the lambda list from another list, I only get the reference to the latest element of the list:
lambdas = [lambda x: x.function(i) for i in range(3)]
for l in lambdas:
    l(o)
>>> 2
>>> 2
>>> 2
On closer inspection I can see that each lambda has a different memory address, so they are NOT references to the same function.
So I can only assume that the lambda is keeping a reference to i which has a final value of 2 and therefore when invoked, it takes the value.
So my question is whether it's possible to set the value of the variable inside the lambda before invocation?
Note: The use case for a list of lambdas is to pass them to the agg function of a Pandas groupby on a DataFrame. I am not looking for a solution to the pandas problem, but am curious about the general solution.
Generator Option
Just change lambdas to a generator instead of a list; because a generator is consumed lazily, each lambda gets used with the value i has at that point in the iteration:
lambdas = (lambda x: x.function(i) for i in range(2))
for l in lambdas:
    l(o)
Full code:
class Object:
    def function(self, x):
        print(x)

o = Object()
o.function(0)   # manual call

lambdas = (lambda x: x.function(i) for i in range(2))
for l in lambdas:
    l(o)
Output:
0 #output from manual call
0 #output from generator
1 #output from generator
List Option
If you need a list for things like lambdas[0](o), you can bind i inside each lambda at creation time by using i=i, like so:
lambdas = [lambda x, i=i: x.function(i) for i in range(2)]
Example of second option:
class Object:
    def function(self, x):
        print(x)

o = Object()

lambdas = [lambda x, i=i: x.function(i) for i in range(2)]   # notice the change
for i in range(len(lambdas)):
    lambdas[i](o)   # notice the change
Output:
0
1
What takes place in this expression:
lambdas = [lambda x: x.function(i) for i in range(2)]
is that the "living" (nonlocal) i variable is used inside each lambda created, and at the end of the for loop its value is the last value it took, which is what gets used when the lambdas are actually called.
The fix for that is to create an intermediary namespace which will "freeze" the nonlocal variable value at the time the lambda is created. This is usually done with another lambda:
lambdas = [(lambda i: (lambda x: x.function(i)))(i) for i in range(2)]
So, bear with me: in the above expression, for each execution of the for i loop, a new, disposable lambda i is created and called immediately with the current value of the i used in the for. Inside it, this value is bound to a local i variable that is unique to this disposable lambda i (in Python's internal workings, it gets its own "cell"). This unique i is then used in the second, permanent, lambda x expression. Whenever that one is called, it will use the i value persisted in the outer lambda i call. The external lambda i then returns the lambda x expression as its result, but its nonlocal i is bound to the value used inside the lambda i, not the one used in the for i.
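The same "intermediary namespace" idea can be written with an ordinary factory function instead of the outer lambda; make_caller below is just an illustrative name, not something from the question:
def make_caller(i):                   # i here is a parameter local to make_caller
    return lambda x: x.function(i)    # the returned lambda closes over that frozen i

lambdas = [make_caller(i) for i in range(2)]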
This is a common gotcha in Python; it can't really be "fixed", because it is part of how the language's scoping works.
There is a shorter, and working, form to "freeze" the i from the for i loop when each lambda is created, one that does not require an outer function scope: when a function is created, the values passed as defaults for its parameters are stored along with the function. So if one stores the current value of i as a default value, it won't change when the variable i itself does:
lambdas = [lambda x, i=i: x.function(i) for i in range(2)]
Here, in the lambda x, i=i: snippet, the value of i in the scope the lambda is created is stored as the default value for the parameter i, which works as a local (in contrast with a nonlocal) variable inside the lambda function itself.
The answer in this post details nicely how python inner functions don't use the value of closure variables until the inner function actually executes, finding the variable name in the proper scope.
For example:
funcs = [(lambda: x) for x in range(3)]
Calling any of the generated lambdas returns 2:
>>> funcs[0]()
2
>>> funcs[1]()
2
>>> funcs[2]()
2
Is there a way to force the value for x to be determined when the function is defined instead of when it is executed later? In the above example, my desired output is 0, 1, 2, respectively.
More specifically, my use-case is to provide a way for API users to conveniently turn a custom function into a thread using a decorator. For example:
for idx in range(3):
    @thread_this(name=f'thread_{idx}')
    def custom():
        do_something()
        print(f'thread_{idx} complete.')
When the final print statement executes, it picks up whatever the current value of idx is in the global scope. With appropriate sleep statements, all 3 threads will print 'thread_2 complete.'
You can use functools.partial. The first problem can be solved with:
import functools
funcs = [functools.partial(lambda x: x, x) for x in range(3)]
It will give you the desired result.
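As a quick check of that result (a sketch reusing the funcs list above):
print([f() for f in funcs])   # [0, 1, 2] - each partial froze its own x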
However, I could not understand the second use case.
I have come across this example from the Python Hitchhiker's Guide:
def create_multipliers():
    return [lambda x, i=i: i * x for i in range(5)]
The example above is the solution to some issues caused by late binding, where variables used in closures are looked up at the time the inner function is called.
What does the i=i mean and why is it making such difference?
It's actually not just for lambdas; any function that takes default parameters will use the same syntax. For example
def my_range(start, end, increment=1):
    ans = []
    while start < end:
        ans.append(start)
        start += increment
    return ans
(This is not actually how range works, I just thought it would be a simple example to understand). In this case, you can call my_range(5,10) and you will get [5,6,7,8,9]. But you can also call my_range(5,10,increment=2), which will give you [5, 7, 9].
You can get some surprising results with default arguments. As this excellent post describes, the argument is bound at function definition, not at function invocation as you might expect. That causes some strange behavior, but it actually helps us here. Consider the incorrect code provided in your link:
def create_multipliers():
    return [lambda x : i * x for i in range(5)]

for multiplier in create_multipliers():
    print multiplier(2)
When you call multiplier(2), what is it actually doing? It's taking your input parameter, 2, and returning i * 2. But what is i? The function doesn't have any variable called i in its own scope, so it checks the surrounding scope. In the surrounding scope, the value of i is just whatever value you left it -- in this case 4. So every function gives you 8.
On the other hand, if you provide a default parameter, the function has a variable called i in its own scope. What's the value of i? Well, you didn't provide one, so it uses its default value, which was bound when the function was defined. And when the function was defined, i had a different value for each of the functions in your list!
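You can check this directly: the stored defaults live on each function object (a small sketch):
def create_multipliers():
    return [lambda x, i=i: i * x for i in range(5)]

for f in create_multipliers():
    print(f.__defaults__)   # (0,), (1,), (2,), (3,), (4,) - one frozen value per function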
It is a bit confusing that they've used the same name for the parameter variable as they did for the iterating variable. I suspect you could get the same result with greater readability with
def create_multipliers():
    return [(lambda x, y=i: y*x) for i in range(5)]
In that case, each number in the range will be assigned to the optional parameter of the corresponding lambda function:
def create_multipliers():
    return [lambda x, i=i: i * x for i in range(5)]
lambda x, i=0
lambda x, i=1
lambda x, i=2
lambda x, i=3
lambda x, i=4
So, you can call the functions now with one parameter (because they already have the default)
for f in create_multipliers():
    print(f(3))
0
3
6
9
12
Or you can call the functions and pass the parameter you want; that's why it is optional:
for f in create_multipliers():
    print(f(3, 2))
6
6
6
6
6
There are cases where an optional parameter is genuinely needed, such as recursion.
For example, square defined recursively in terms of repeated addition:
square = lambda n, m=0: 0 if n==m else n+square(n,m+1)
Note that the optional parameter there is used as a counter.
I am asking because of the classic problem where somebody creates a list of lambdas:
foo = []
for i in range(3):
    foo.append((lambda: i))

for l in foo:
    print(l())
and unexpectedly gets only twos as output.
The commonly proposed solution is to make i a named argument like this:
foo = []
for i in range(3):
    foo.append((lambda i=i: i))

for l in foo:
    print(l())
Which produces the desired output of 0, 1, 2 but now something magical has happened. It sort of did what is expected because Python is pass-by-reference and you didn't want a reference.
Still, just adding a new name to something, shouldn't that just create another reference?
So the question becomes what are the exact rules for when something is not a reference?
Considering that ints are immutable and the following works:
x = 3
y = x
x = 5
print(x, y)  # outputs 5 3
probably explains why adding that named parameter works. A local i with the same value was created and captured.
Now why, in the case of our lambdas, was the same i referenced? If I pass an int to a function it is referenced, and if I store it in a variable it is copied. Hm.
Basically I am looking for the most concise and abstract way possible to remember exactly how this works. When is the same value referenced, when do I get a copy. If it has any common names and there are programming languages were it works the same that would be interesting as well.
Here is my current assumption:
Arguments are always passed to functions by reference.
Assigning to a variable of immutable type creates a copy.
I am asking anyway, just to make sure and hopefully get some background.
The issue here is how you think of names.
In your first example, i is a variable that is assigned to on every iteration of the loop. When you use lambda to make a function, you make a function that accesses the name i and returns its value. This means that as the name i changes, so does the value returned by the functions.
The reason the default argument trick works is that the name is evaluated when the function is defined. This means the default value is the value the i name points to at that time, not the name itself.
i is a label. 0, 1 and 2 are the objects. In the first case, the program assigns 0 to i, then makes a function that returns i - it then does this with 1 and 2. When the function is called, it looks up i (which is now 2) and then returns it.
In the second example, you assign 0 to i, then you make a function with a default argument. That default argument is the value that is gotten by evaluating i - that is the object 0. This is repeated for 1 and 2. When the function is called, it assigns that default value to a new variable i, local to the function and unrelated to the outer i.
Python doesn't exactly pass by reference or by value (at least, not the way you'd think of it, coming from a language like C++).
In many other languages (such as C++), variables can be thought of as synonymous with the values they hold.
However, in Python, variables are names that point to the objects in memory.
(This is a good explanation (with pictures!))
Because of this, you can get multiple names attached to one object, which can lead to interesting effects.
Consider these equivalent program snippets:
// C++:
int x;
x = 10; // line A
x = 20; // line B
and
# Python:
x = 10 # line C
x = 20 # line D
After line A, the int 10 is stored in memory, say, at the memory address 0x1111.
After line B, the memory at 0x1111 is overwritten, so 0x1111 now holds the int 20
However, the way this program works in python is quite different:
After line C, x points to some memory, say, 0x2222, and the value stored at 0x2222 is 10
After line D, x points to some different memory, say, 0x3333, and the value stored at 0x3333 is 20
Eventually, the orphaned memory at 0x2222 is garbage collected by Python.
Hopefully this helps you get a grasp of the subtle differences between variables in Python and most other languages.
(I know I didn't directly answer your question about lambdas, but I think this is good background knowledge to have before reading one of the good explanations here, such as @Lattyware's.)
See this question for some more background info.
Here's some final background info, in the form of oft-quoted but instructive examples:
print 'Example 1: Expected:'
x = 3
y = x
x = 2
print 'x =', x
print 'y =', y
print 'Example 2: Surprising:'
x = [3]
y = x
x[0] = 2
print 'x =', x
print 'y =', y
print 'Example 3: Same logic as in Example 1:'
x = [3]
y = x
x = [2]
print 'x =', x
print 'y =', y
The output is:
Example 1: Expected:
x = 2
y = 3
Example 2: Surprising:
x = [2]
y = [2]
Example 3: Same logic as in Example 1:
x = [2]
y = [3]
foo = []
for i in range(3):
    foo.append((lambda: i))
Here, since all the lambdas were created in the same scope, all of them refer to the same global variable i. So whatever value i points to when they are actually called is what gets returned.
foo = []
for i in range(3):
    foo.append((lambda z=i: id(z)))

print id(i)        # 165618436
print(foo[-1]())   # 165618436
Here, in each loop iteration we assign the value of i as the default of a local variable z. Since default arguments are evaluated when the lambda expression is executed (i.e. when the function object is created), z simply points to whatever object i referred to during that iteration.
Arguments are always passed to functions by reference?
In fact the z in foo[-1] still points to the same object as the i of the last iteration, so yes, values are passed by reference, but because integers are immutable, changing i won't affect the z of foo[-1] at all.
In the example below all lambda's point to some mutable object, so modifying items in lis will also affect the functions in foo:
foo = []
lis = ([], [], [])
for i in lis:
    foo.append((lambda z=i: z))
lis[0].append("bar")
print foo[0]() #prints ['bar']
i.append("foo") # `i` still points to lis[-1]
print foo[-1]() #prints ['foo']
Assigning to a variable of immutable type creates a copy?
No, values are never copied.
>>> x = 1000
>>> y = x # x and y point to the same object, but an immutable object.
>>> x += 1 # so modifying x won't affect y at all, in fact after this step
# x now points to some different object and y still points to
# the same object 1000
>>> x #x now points to an new object, new id()
1001
>>> y #still points to the same object, same id()
1000
>>> x = []
>>> y = x
>>> x.append("foo") #modify an mutable object
>>> x,y #changes can be seen in all references to the object
(['foo'], ['foo'])
The list of lambdas problem arises because the i referred to in both snippets is the same variable.
Two distinct variables with the same name exist only if they exist in two separate scopes. See the following link for when that happens, but basically any new function (including a lambda) or class establishes its own scope, as do modules, and pretty much nothing else does. See: http://docs.python.org/2/reference/executionmodel.html#naming-and-binding
HOWEVER, when reading the value of a variable, if it is not defined in the current local scope, the enclosing local scopes are searched*. Your first example is of exactly this behaviour:
foo = []
for i in range(3):
    foo.append((lambda: i))

for l in foo:
    print(l())
Each lambda creates no variables at all, so its own local scope is empty. When execution hits the locally undefined i, it is located in the enclosing scope.
In your second example, each lambda creates its own i variable in the parameter list:
foo = []
for i in range(3):
    foo.append((lambda i=i: i))
This is in fact equivalent to lambda a=i: a, because the i inside the body is the same as the i on the left hand side of the assignment, and not the i on the right hand side. The consequence is that i is not missing from the local scope, and so the value of the local i is used by each lambda.
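One way to see that difference concretely is CPython's function introspection attributes (a sketch, not part of the original question):
def build():
    i = 7
    plain = lambda: i        # i is a free (closure) variable here
    bound = lambda i=i: i    # i is a parameter, i.e. a local variable
    return plain, bound

plain, bound = build()
print(plain.__code__.co_freevars, plain.__closure__[0].cell_contents)   # ('i',) 7
print(bound.__code__.co_varnames, bound.__closure__)                    # ('i',) None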
Update: Both of your assumptions are incorrect.
Function arguments are passed by value. The value passed is the reference to the object. Pass-by-reference would allow the original variable to be altered.
No implicit copying ever occurs on function call or assignment, of any language-level object. Under the hood, because this is pass-by-value, the references to the parameter objects are copied when the function is called, as is usual in any language which passes references by value.
Update 2: The details of function evaluation are here: http://docs.python.org/2/reference/expressions.html#calls . See the link above for the details regarding name binding.
* No actual linear search occurs in CPython, because the correct variable to use can be determined at compile time.
The answer is that the references created in a closure (where a function is inside a function, and the inner function accesses variables from the outer one) are special. This is an implementation detail, but in CPython the value is a particular kind of object called a cell and it allows the variable's value to be changed without rebinding it to a new object. More info here.
The way variables work in Python is actually rather simple.
All variables contain references to objects.
Reassigning a variable points it to a different object.
All arguments are passed by value when calling functions (though the values being passed are references).
Some types of objects are mutable, which means they can be changed without changing what any of their variable names point to. Only these types can be changed when passed, since this does not require changing any references to the object.
Values are never copied implicitly. Never.
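A short sketch of the last three points (rebinding a parameter never affects the caller, mutating a shared mutable object does):
def rebind(x):
    x = x + 1        # rebinds the local name only; the caller's variable is untouched

def mutate(lst):
    lst.append(1)    # mutates the object both names refer to

n, items = 10, []
rebind(n)
mutate(items)
print(n, items)      # 10 [1]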
The behaviour really has very little to do with how parameters are passed (which is always the same way; there is no distinction in Python where things are sometimes passed by reference and sometimes passed by value). Rather the problem is to do with how names themselves are found.
lambda: i
creates a function that is of course equivalent to:
def anonymous():
    return i
That i is a name, within the scope of anonymous. But it's never bound within that scope (not even as a parameter). So for that to mean anything i must be a name from some outer scope. To find a suitable name i, Python will look at the scope in which anonymous was defined in the source code (and then similarly out from there), until it finds a definition for i.1
So this loop:
foo = []
for i in range(3):
    foo.append((lambda: i))

for l in foo:
    print(l())
Is almost exactly as if you had written this:
foo = []
for i in range(3):
    def anonymous():
        return i
    foo.append(anonymous)

for l in foo:
    print(l())
So that i in return i (or lambda: i) ends up being the same i from the outer scope, which is the loop variable. Not that they are all references to the same object, but that they are all the same name. So it's simply not possible for the functions stored in foo to return different values; they're all returning the object referred to by a single name.
To prove it, watch what happens when I remove the variable i after the loop:
>>> foo = []
>>> for i in range(3):
        foo.append((lambda: i))

>>> del i
>>> for l in foo:
        print(l())
Traceback (most recent call last):
  File "<pyshell#7>", line 2, in <module>
    print(l())
  File "<pyshell#3>", line 2, in <lambda>
    foo.append((lambda: i))
NameError: global name 'i' is not defined
You can see that the problem isn't that each function has a local i bound to the wrong thing, but rather than each function is returning the value of the same global variable, which I've now removed.
OTOH, when your loop looks like this:
foo = []
for i in range(3):
    foo.append((lambda i=i: i))

for l in foo:
    print(l())
That is quite like this:
foo = []
for i in range(3):
    def anonymous(i=i):
        return i
    foo.append(anonymous)

for l in foo:
    print(l())
Now the i in return i is not the same i as in the outer scope; it's a local variable of the function anonymous. A new function is created in each iteration of the loop (stored temporarily in the outer scope variable anonymous, and then permanently in a slot of foo), so each one has its own local variables.
As each function is created, the default value of its parameter is set to the value of i (in the scope defining the functions). Like any other "read" of a variable, that pulls out whatever object is referenced by the variable at that time, and thereafter has no connection to the variable.2
So each function gets the default value of i as it is in the outer scope at the time it is created, and then when the function is called without an argument that default value becomes the value of the i in that function's local scope. Each function has no non-local references, so is completely unaffected by what happens outside it.
1 This is done at "compile time" (when the Python file is converted to bytecode), with no regard for what the system is like at runtime; it is almost literally looking for an outer def block with i = ... in the source code. So local variables are actually statically resolved! If that lookup chain falls all the way out to the module global scope, then Python assumes that i will be defined in the global scope at the point that the code will be run, and just treats i as a global variable whether or not there is a statically visible binding for i at module scope, hence why you can dynamically create global variables but not local ones.
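A sketch of that footnote: a function may refer to a global that does not exist yet, but a name assigned anywhere in a function is local for the whole function body:
def uses_global():
    return later          # compiles fine; 'later' is treated as a global

later = 99
print(uses_global())      # 99 - the global just has to exist by the time the call happens

def uses_local():
    print(n)              # n is local here because of the assignment below ...
    n = 1

try:
    uses_local()
except UnboundLocalError as exc:
    print(exc)            # ... so reading it before the assignment fails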
2 Confusingly, this means that in lambda i=i: i, the three i's refer to three completely different "variables" in two different scopes on the one line.
The leftmost i is the "name" holding the value that will be used for the default value of i, which exists independently of any particular call of the function; it's almost exactly "member data" stored in the function object.
The second i is an expression evaluated as the function is created, to get the default value. So the i=i bit acts very like an independent statement the_function.default_i = i, evaluated in the same scope containing the lambda expression.
And finally the third i is actually the local variable inside the function, which only exists within a call to the anonymous function.
I'm trying to return from a function a list of functions, each of which uses variables from the outside scope. This isn't working. Here's an example which demonstrates what's happening:
a = []
for i in range(10):
    a.append(lambda x: x+i)
a[1](1) # returns 10, where it seems it should return 2
Why is this happening, and how can I get around it in python 2.7 ?
The i refers to the same variable each time, so i is 9 in all of the lambdas because that's the value of i at the end of the loop. Simplest workaround involves a default argument:
lambda x, i=i: x+i
This binds the value of the loop's i to a local variable i at the lambda's definition time.
Another workaround is to define a lambda that defines another lambda, and call the first lambda:
(lambda i: lambda x: x+i)(i)
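Applied to the original loop, that workaround looks like this (a sketch):
a = []
for i in range(10):
    a.append((lambda i: lambda x: x+i)(i))   # the outer lambda freezes this iteration's i

a[1](1)   # returns 2, as expected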
This behavior makes a little more sense if you consider this:
def outerfunc():
    def innerfunc(x):
        return x+i
    a = []
    for i in range(10):
        a.append(innerfunc)
    return a
Here, innerfunc is defined once, so it makes intuitive sense that you are only working with a single function object, and you would not expect the loop to create ten different closures. With a lambda it doesn't look like the function is defined only once; it looks like you're defining it fresh each time through the loop, but in fact it is functionally the same as the long version.
Because i isn't getting evaluated when you define the anonymous function (lambda expression) but when it's called. You can see this by adding del i before a[1](1): you'll get NameError: global name 'i' is not defined on the a[1](1) line.
You need to fix the value of i into the lambda expression every time, like so:
a = [lambda x, i=i: x+i for i in range(10)]
a[1](1) # returns 2
Another, more general solution - also without lambdas:
import operator
from functools import partial

a = []
for i in range(10):
    a.append(partial(operator.add, i))

a[1](1)   # returns 2
The key aspect here is functools.partial.
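As a quick sketch of why this works: the partial object stores the bound value on itself, much like the default-argument trick does:
from functools import partial
import operator

f = partial(operator.add, 3)
print(f.args)   # (3,) - the frozen first argument
print(f(1))     # 4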