How do Python generator functions maintain local state?

According to the docs at https://docs.python.org/2/reference/simple_stmts.html#yield,
all local state is retained, including the current bindings of local variables, the instruction pointer, and the internal evaluation stack: enough information is saved so that the next time next() is invoked, the function can proceed exactly as if the yield statement were just another external call.
Here's a simple case:
def generator():
    my_list = range(10)
    print "my_list got assigned"
    for i in my_list:
        print i
        yield
    return
In the shell, generator() behaves like this:
>>> generator().next()
my_list got assigned
0
>>> generator().next()
my_list got assigned
0
I would have thought that my_list would not get reassigned each time .next() is called. Can someone explain why this happens, and why it seems like the docs contradict this?

You are creating a new generator object each time. Create one instance:
g = generator()
g.next()
g.next()
Here g references the generator object that maintains the state:
>>> def generator():
...     my_list = range(10)
...     print "my_list got assigned"
...     for i in my_list:
...         print i
...         yield
...     return
...
>>> g = generator()
>>> g
<generator object generator at 0x100633f50>
>>> g.next()
my_list got assigned
0
>>> g.next()
1

Yes, the generator maintains state, as you correctly found in the documentation. The problem is that in your example, you're creating two generators. The first one is not assigned to any variable, so it's discarded as soon as .next() completes. Then you create a second generator, with its own local state, which starts at the beginning.
Try this:
>>> mygen = generator()
>>> mygen.next()
my_list got assigned
0
>>> mygen.next()
1
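To see that each call really creates a separate object, compare two calls directly (a quick check using the same generator as above):
>>> g1 = generator()
>>> g2 = generator()
>>> g1 is g2
False
Each generator object carries its own frame, so the two advance independently.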

You're creating a new instance of generator when you call generator().
If you instead did
my_generator = generator()
my_generator.next() # my_list got assigned, 0
my_generator.next() # 1
the list would only be assigned once.
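For reference, under Python 3 the .next method is gone; you call the built-in next() instead (a sketch, assuming the prints above are ported to the print() function):
my_generator = generator()
next(my_generator)  # prints "my_list got assigned" and 0
next(my_generator)  # prints 1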

Related

Forcing a list out of a generator

>>> def change(x):
...     x.append(len(x))
...     return x
...
>>> a = []
>>> b = (change(a) for i in range(3))
>>> next(b)
[0]
>>> next(b)
[0, 1]
>>> next(b)
[0, 1, 2]
>>> next(b)
Traceback ... StopIteration
>>> a = []
>>> b = (change(a) for i in range(3))
>>> list(b)  # expecting [[0], [0, 1], [0, 1, 2]]
[[0, 1, 2], [0, 1, 2], [0, 1, 2]]
So I was just testing my understanding of generators and messing around with the command prompt and now I'm unsure if I actually understand how generators work.
The problem is that all calls to change(a) return the same object (in this case, the object is the value of a), but this object is mutable and changes its value. An example of the same problem without using generators:
a = []
b = []
for i in range(3):
    a.append(len(a))
    b.append(a)
print b
If you want to avoid it, you need to make a copy of your object (for example, make change return x[:] instead of x).
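A minimal sketch of that fix, returning a copy so each yielded value is a snapshot of the list at that moment:
def change(x):
    x.append(len(x))
    return x[:]  # shallow copy: later appends won't affect this value

a = []
b = (change(a) for i in range(3))
list(b)  # [[0], [0, 1], [0, 1, 2]]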

Python dynamic function attribute

I came across an interesting issue while trying to achieve dynamic sort.
Given the following code:
>>> l = []
>>> for i in range(2):
...     def f():
...         return f.v
...     f.v = i
...     l.append(f)
...
You have to be careful about how to use the functions in l:
>>> l[0]()
1
>>> l[1]()
1
>>> [h() for h in l]
[1, 1]
>>> [f() for f in l]
[0, 1]
>>> f = l[0]
>>> f()
0
>>> k = l[1]
>>> k()
0
>>> f = l[1]
>>> k()
1
>>> del f
>>> k()
NameError: global name 'f' is not defined
The behavior of the function depends on what f currently is.
What should I do to avoid this issue? How can I set a function attribute that does not depend on the function's name?
Update
Reading your comments and answers, here is my actual problem.
I have some data that I want to sort according to user input (so I don't know the sorting criteria in advance). The user can choose which parts of the data to apply successive sorts to, and each sort can be ascending or descending.
So my first try was to loop over the user inputs, define a function for each criterion, store this function in a list and then use this list for sorted's key like this: key=lambda x: [f(x) for f in functions]. To avoid multiplying conditions into functions themselves, I was computing some needed values before the function definition and binding them to the function (different functions with different pre-computed values).
While debugging, I understood that function attribute was not the solution here, so I indeed wrote a class with a __call__ method.
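For the record, here is a sketch of that sorting pattern built with closures instead of function attributes; the factory name make_key, the criteria format, and the sample data are all invented for illustration, and the sign trick assumes numeric fields:
def make_key(field, descending):
    # each make_key call captures its own field/descending pair
    sign = -1 if descending else 1
    return lambda row: sign * row[field]

criteria = [("age", False), ("score", True)]  # stand-in for user input
functions = [make_key(f, d) for f, d in criteria]

data = [{"age": 30, "score": 5}, {"age": 30, "score": 9}, {"age": 25, "score": 1}]
print(sorted(data, key=lambda row: [f(row) for f in functions]))
# [{'age': 25, 'score': 1}, {'age': 30, 'score': 9}, {'age': 30, 'score': 5}]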
The issue is due to the fact that return f.v loads the global f, not the one you intend [1]. You can see this by disassembling the code:
>>> dis.dis(l[0])
  3           0 LOAD_GLOBAL              0 (f)
              3 LOAD_ATTR                1 (v)
              6 RETURN_VALUE
After the loop that populates l, f is a reference to the last closure created, as you can see here:
>>> l
[<function f at 0x02594170>, <function f at 0x02594130>]
>>> f
<function f at 0x02594130>
Thus, when you call l[0](), it loads the global f, which points to the last function created, and returns 1. When you redefine f by doing f = l[0], the global f then points to the first function.
What you seem to want is a function that has a state, which really is a class. You could therefore do something like this:
class MyFunction:
    def __init__(self, v):
        self.v = v

    def __call__(self):
        return self.v

l = [MyFunction(i) for i in range(2)]
l[0]()  # 0
l[1]()  # 1
Though it may be a good idea to explain your actual problem first, as there might be a better solution.
[1]: Why does it load the global f and not the current instance, you may ask?
Recall that when you create a class, you need to pass a self argument, like so:
class A(object):
    # ...
    def my_method(self):
        return self.value
self is actually a reference to the current instance of your object. That's how Python knows where to load the attribute value. It knows it has to look into the instance referenced by self. So when you do:
a = A()
a.value = 1
a.my_method()
self is now a reference to a.
So when you do:
def f():
    return f.v
There's no way for Python to know what f actually is. It's not a parameter, so it has to load it from elsewhere. In your case, it's loaded from the global variables.
Thus, when you do f.v = i, while you do set an attribute v on the function object, there's no way to know which object you are referring to in the body of your function.
Note that what you are doing here:
def f():
    return f.v
is not making a function which returns whatever its own v attribute is. It's returning whatever the f object's v attribute is. So it necessarily depends on the value of f. It's not that your v attribute "depends on the function's name". It really has nothing at all to do with the function's name.
Later, when you do
>>> f = l[0]
>>> k = l[1]
>>> k()
0
What you have done is bound k to the function at l[1]. When you call it, you of course get f.v, because that's what the function does.
But notice:
>>> k.v
1
>>> [h.v for h in l]
[0, 1]
So, a function is an object, and just like most objects, it can have attributes assigned to it (which you can access using dot notation, or the getattr() function, or by inspecting the object's dictionary, etc.). But a function is not designed to access its own attributes from within its own code. For that, you want to use a class (as demonstrated by @VincentSavard).
In your particular case, the effect you seem to be after doesn't really need an "attribute" per se; you are apparently looking for a closure. You can implement a closure using a class, but a lighter-weight way is a nested function (one form of which is demonstrated by @TomKarzes; you could also use a named inner function instead of lambda).
Try this:
l = []
for i in range(2):
    def f(n):
        return lambda: n
    l.append(f(i))
This doesn't use attributes, but creates a closure for each value of i. The value of n is then locked once f returns. Here's some sample output:
>>> [f() for f in l]
[0, 1]
As others said, return f.v looks up the name f in the enclosing scope, which, after the loop, refers to the last function defined.
To work around this you can simulate functions:
>>> class Function(object):
...     def __init__(self, return_value):
...         self.return_value = return_value
...     def __call__(self):
...         return self.return_value
...
>>> l = []
>>> for i in range(2):
...     l.append(Function(i))
...
>>> l[0]()
0
>>> l[1]()
1
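A lighter-weight alternative worth mentioning (my own addition, not from the answers above) is functools.partial, which freezes the value at creation time without defining a class:
>>> from functools import partial
>>> def return_value(v):
...     return v
...
>>> l = [partial(return_value, i) for i in range(2)]
>>> l[0]()
0
>>> l[1]()
1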

yield statement in myhdl

I have the following code in my myhdl environment:
def rst(self):
    rst.next = rst.active
    self.wait_clks(5)

def wait_clks(self, cycles):
    for _ in range(cycles):
        yield self.clk.posedge
The above code doesn't work, but when I replace it with the following, it does:
def rst(self):
    rst.next = rst.active
    for _ in range(5):
        yield self.clk.posedge
I am confused by this. Can anyone explain why the yield in the separate function definition doesn't work?
When you simply call a generator function (one that has a yield statement in its body), you get a generator object; the function body does not start executing at that point. Execution starts only when you iterate over the returned generator object (or call next() on it). Example -
>>> def gen1():
...     print("Starting")
...     for i in range(10):
...         yield i
...
>>> g = gen1()
>>> g
<generator object gen1 at 0x00273E68>
As you can see above, it did not start executing the function body; it just returned the generator object. To step through the function you need to iterate over g or call next() on it. Example -
>>> g.next()
Starting
0
>>> for i in g:
...     print i
...
1
2
.
.
Something similar is happening in your first case: you are just calling the generator function, which returns a generator object, and then discarding the result. Most probably, wherever rst() is called from, a generator object is expected in return, in which case your second method is the best.
But if you really want to keep it as a separate function (and I do not see any need for a separate method), you can directly return the result of self.wait_clks(5) from rst(self). Example -
def rst(self):
    rst.next = rst.active
    return self.wait_clks(5)
Example to show that this works -
>>> def f():
...     return gen1()
...
>>> for i in f():
...     print(i)
...
Starting
0
1
2
.
.
As described by Anand, you can't simply call a generator function; in this case, if you yield the generator you will get what you expect:
def rst(self):
    rst.next = rst.active
    yield self.wait_clks(5)
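For reference, outside of myhdl, Python 3's yield from is the general-purpose way to delegate to a sub-generator (a generic sketch; countdown and launch are invented names):
def countdown(n):
    for i in range(n, 0, -1):
        yield i

def launch():
    yield "ready"
    yield from countdown(3)  # values now come from the sub-generator

print(list(launch()))  # ['ready', 3, 2, 1]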

Python closures using lambda

I saw the piece of code below in a tutorial and am wondering how it works.
Generally, a lambda takes an input and returns something, but here it takes nothing and still works.
>>> for i in range(3):
...     a.append(lambda: i)
...
>>> a
[<function <lambda> at 0x028930B0>, <function <lambda> at 0x02893030>, <function <lambda> at 0x028930F0>]
lambda:i defines the constant function that returns i.
Try this:
>>> f = lambda:3
>>> f()
You get the value 3.
But there's something more going on. Try this:
>>> a = 4
>>> g = lambda:a
>>> g()
gives you 4. But after a = 5, g() returns 5. Python functions "remember" the environment in which they were defined and look names up in it when called; a function together with that environment is called a "closure". By modifying the data in that environment (e.g. the variable a in the second example) you can change the behavior of the functions defined in it.
In this case a is a list of function objects defined in the loop, each of which will return 2:
>>> a[0]()
2
To make these function objects remember the successive values of i, you should rewrite the code to
>>> for i in range(3):
...     a.append(lambda x=i: x)
...
that will give you
>>> a[0]()
0
>>> a[1]()
1
>>> a[2]()
2
but in this case you get a side effect that allows callers to override the remembered value:
>>> a[0](42)
42
I'm not sure what you mean by "it works". It appears that it doesn't work at all. In the case you have presented, i is a global variable. It changes every time the loop iterates, so after the loop, i == 2. Now, since each lambda function simply says lambda:i, each function call will simply return the most recent value of i. For example:
>>> a = []
>>> for i in range(3):
...     a.append(lambda: i)
...
>>> print a[0]()
2
>>> print a[1]()
2
>>> print a[2]()
2
In other words, this likely does not do what you expect it to do.
lambda defines an anonymous inline function. These functions are limited compared to the full functions you can define with def: they can't contain statements such as assignments, and they just return a result. However, you can run into interesting issues with them. Defining an ordinary function inside a loop is not common, but lambda functions are often put into loops, and this can create closure issues.
The following:
>>> a = []
>>> for i in range(3):
...     a.append(lambda: i)
...
adds three functions (which are first-class objects in Python) to a. These functions return the value of i, but they look i up when called, so they see its value as it stands after the loop has finished. Therefore, you can call any of these functions:
>>> a[0]()
2
>>> a[1]()
2
>>> a[2]()
2
and they will each return 2, the value of i from the loop's last iteration. If you want each to return a different number, use a default argument:
>>> for i in range(3):
...     a.append(lambda i=i: i)
...
This will forcibly give each function an i as it was at that specific point during execution.
>>> a[0]()
0
>>> a[1]()
1
>>> a[2]()
2
Of course, since we're now able to pass an argument to that function, we can do this:
>>> a[0](5)
5
>>> a[0](range(3))
range(0, 3)
It all depends on what you're planning to do with it.
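One more workaround, sketched here as my own addition: use a factory function so each closure captures its own variable instead of sharing the loop's i (the same idea as the nested-function approach shown in an earlier answer):
def make_const(value):
    return lambda: value  # value is local to each make_const call

a = [make_const(i) for i in range(3)]
print([f() for f in a])  # [0, 1, 2]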

Attempting to understand yield as an expression

I'm playing around with generators and generator expressions and I'm not completely sure that I understand how they work (some reference material):
>>> a = (x for x in range(10))
>>> next(a)
0
>>> next(a)
1
>>> a.send(-1)
2
>>> next(a)
3
So it looks like generator.send was ignored. That makes sense (I guess) because there is no explicit yield expression to catch the sent information ...
However,
>>> a = ((yield x) for x in range(10))
>>> next(a)
0
>>> print next(a)
None
>>> print next(a)
1
>>> print next(a)
None
>>> a.send(-1) #this send is ignored, Why? ... there's a yield to catch it...
2
>>> print next(a)
None
>>> print next(a)
3
>>> a.send(-1) #this send isn't ignored
-1
I understand this is pretty far out there, and I (currently) can't think of a use-case for this (so don't ask;)
I'm mostly just exploring to try to figure out how these various generator methods work (and how generator expressions work in general). Why does my second example alternate between yielding a sensible value and None? Also, can anyone explain why one of my generator.send calls was ignored while the other wasn't?
The confusion here is that the generator expression is doing a hidden yield. Here it is in function form:
def foo():
    for x in range(10):
        yield (yield x)
When you do a .send(), what happens is the inner yield x gets executed, which yields x. Then the expression evaluates to the value of the .send, and the next yield yields that. Here it is in clearer form:
def foo():
    for x in range(10):
        sent_value = (yield x)
        yield sent_value
Thus the output is very predictable:
>>> a = foo()
#start it off
>>> a.next()
0
#execution has now paused at "sent_value = ?"
#now we fill in the "?". whatever we send here will be immediately yielded.
>>> a.send("yieldnow")
'yieldnow'
#execution is now paused at the 'yield sent_value' expression
#as this is not assigned to anything, whatever is sent now will be lost
>>> a.send("this is lost")
1
#now we're back where we were at the 'yieldnow' point of the code
>>> a.send("yieldnow")
'yieldnow'
#etc, the loop continues
>>> a.send("this is lost")
2
>>> a.send("yieldnow")
'yieldnow'
>>> a.send("this is lost")
3
>>> a.send("yieldnow")
'yieldnow'
EDIT: Example usage. By far the coolest one I've seen so far is twisted's inlineCallbacks function. See here for an article explaining it. The nub of it is that it lets you yield functions to be run in threads, and once the functions are done, twisted sends the result of the function back into your code. Thus you can write code that relies heavily on threads in a very linear and intuitive manner, instead of having to write tons of little callback functions all over the place.
See PEP 342 for more info on the rationale for .send and its potential use cases (the twisted example I provided is an example of the boon this change offered to asynchronous I/O).
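To make the .send protocol concrete, here is a small running-average co-routine in the PEP 342 style (a standard textbook sketch, not taken from this thread):
def averager():
    total = 0.0
    count = 0
    average = None
    while True:
        value = (yield average)  # receive the next number from send()
        total += value
        count += 1
        average = total / count

avg = averager()
next(avg)            # prime it: run up to the first yield
print(avg.send(10))  # 10.0
print(avg.send(30))  # 20.0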
You're confusing yourself a bit because you actually are generating from two sources: the generator expression (... for x in range(10)) is one generator, but you create another source with the yield. You can see this if you do list(a): you'll get [0, None, 1, None, 2, None, 3, None, 4, None, 5, None, 6, None, 7, None, 8, None, 9, None].
Your code is equivalent to this:
>>> def gen():
...     for x in range(10):
...         yield (yield x)
Only the inner yield ("yield x") is "used" in the generator: it serves as the value of the outer yield. So this generator alternates between yielding values of the range and yielding whatever is "sent" to those yields. If you send something to the inner yield, you get it back, but if you happen to send on an even-numbered iteration, the send goes to the outer yield and is ignored.
This generator translates into:
for i in xrange(10):
    x = (yield i)
    yield x
The result of every second send()/next() call is ignored, because nothing is done with the result of one of the yields.
The generator you wrote is equivalent to the more verbose:
def testing():
    for x in range(10):
        x = (yield x)
        yield x
As you can see here, the second yield, which is implicit in the generator expression, does not save the value you pass it; therefore, depending on where the generator's execution is paused, the send may or may not have a visible effect.
Indeed, the send method is meant to work with a generator object that results from a co-routine you have explicitly written. It is difficult to give it a meaning in a generator expression, though it works.
-- EDIT --
I had previously written the following, but it is incorrect, as the behavior of yield inside generator expressions is in fact predictable across implementations, though not mentioned in any PEP. (In later Python versions, yield inside a generator expression was eventually disallowed outright.)
generator expressions are not meant to have the yield keyword - I am
not sure the behavior is even defined in this case. We could think a
little and work out what is happening in your expression, to see where
those "None"s are coming from. However, assume that it is a side
effect of how yield is implemented in Python (and probably
implementation dependent), not something that should be relied on.
The correct form for a generator expression, in a simplified manner is:
(<expr> for <variable> in <sequence> [if <expr>])
so <expr> is evaluated for each value in <sequence>: not only is yield unneeded there, you should not use it.
Both yield and the send methods are meant to be used in full co-routines, something like:
def doubler():
    value = 0
    while value < 100:
        value = 2 * (yield value)
And you can use it like:
>>> a = doubler()
>>> # next() has to be called once, so the code will run up to the first "yield"
...
>>> a.next()
0
>>> a.send(10)
20
>>> a.send(20)
40
>>> a.send(23)
46
>>> a.send(51)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>>
