yield statement in MyHDL - Python

I have the following code in my myhdl environment:
def rst(self):
    rst.next = rst.active
    self.wait_clks(5)

def wait_clks(self, cycles):
    for _ in range(cycles):
        yield self.clk.posedge
The above code doesn't work, but when I replace it with the following, it works:
def rst(self):
    rst.next = rst.active
    for _ in range(5):
        yield self.clk.posedge
I am confused by this; can anyone explain why the yield inside the separate function doesn't work?

When you simply call a generator function (one that has a yield statement in its body), you get a generator object; the function body does not start executing at that point. It only starts when you iterate over the returned generator object (or call next() on it). Example -
>>> def gen1():
...     print("Starting")
...     for i in range(10):
...         yield i
...
>>> g = gen1()
>>> g
<generator object gen1 at 0x00273E68>
As you can see above, calling gen1() did not start executing the function body; it just returned the generator object. To run the body you need to iterate over g or call next() on it. Example -
>>> g.next()
Starting
0
>>> for i in g:
...     print i
...
1
2
.
.
Something similar is happening in your first case: you are just calling the generator function, which returns a generator object, and then discarding the result. Most probably, whatever calls rst() expects a generator object in return, in which case your second method is the best.
But if you really want to keep it in a separate method (though I do not see any need for one here), you can directly return the result of self.wait_clks(5) from rst(self). Example -
def rst(self):
    rst.next = rst.active
    return self.wait_clks(5)
An example to show that this works -
>>> def f():
...     return gen1()
...
>>> for i in f():
...     print(i)
...
Starting
0
1
2
.
.

As described by Anand, you can't simply call a generator function and expect its body to run. In this case, if you yield the generator instead, you will get what you expect:
def rst(self):
    rst.next = rst.active
    yield self.wait_clks(5)
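Outside MyHDL, whose simulator knows how to resume a yielded generator, plain Python expresses this delegation with yield from. A minimal sketch, assuming ordinary Python 3 generators in place of MyHDL signals and clock edges (the names here are made up for illustration):

def wait_clks(cycles):
    for i in range(cycles):
        yield i  # hypothetical stand-in for yielding a clock edge

def rst():
    # 'yield from' runs the sub-generator to completion and
    # passes its yielded values through to the caller
    yield from wait_clks(5)

print(list(rst()))  # [0, 1, 2, 3, 4]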


Python `yield from`, or return a generator?

I wrote this simple piece of code:
def mymap(func, *seq):
    return (func(*args) for args in zip(*seq))
Should I use the 'return' statement as above to return a generator, or use a 'yield from' instruction like this:
def mymap(func, *seq):
    yield from (func(*args) for args in zip(*seq))
and beyond the technical difference between 'return' and 'yield from', which is the better approach in the general case?
The difference is that your first mymap is just a usual function,
in this case a factory which returns a generator. Everything
inside the body gets executed as soon as you call the function.
def gen_factory(func, seq):
    """Generator factory returning a generator."""
    # do stuff ... immediately when factory gets called
    print("build generator & return")
    return (func(*args) for args in seq)
The second mymap is also a factory, but it's also a generator
itself, yielding from a self-built sub-generator inside.
Because it is a generator itself, execution of the body does
not start until the first invocation of next(generator).
def gen_generator(func, seq):
    """Generator yielding from sub-generator inside."""
    # do stuff ... first time when 'next' gets called
    print("build generator & yield")
    yield from (func(*args) for args in seq)
I think the following example will make it clearer.
We define data packages which shall be processed with functions,
bundled up in jobs we pass to the generators.
def add(a, b):
    return a + b

def sqrt(a):
    return a ** 0.5
data1 = [*zip(range(1, 5))] # [(1,), (2,), (3,), (4,)]
data2 = [(2, 1), (3, 1), (4, 1), (5, 1)]
job1 = (sqrt, data1)
job2 = (add, data2)
Now we run the following code inside an interactive shell like IPython to
see the different behavior. gen_factory immediately prints
out, while gen_generator only does so after next() is called.
gen_fac = gen_factory(*job1)
# build generator & return <-- printed immediately
next(gen_fac) # start
# Out: 1.0
[*gen_fac] # deplete rest of generator
# Out: [1.4142135623730951, 1.7320508075688772, 2.0]
gen_gen = gen_generator(*job1)
next(gen_gen) # start
# build generator & yield <-- printed with first next()
# Out: 1.0
[*gen_gen] # deplete rest of generator
# Out: [1.4142135623730951, 1.7320508075688772, 2.0]
To give you a more reasonable use case for a construct
like gen_generator, we'll extend it a little and make a coroutine
out of it by assigning the result of yield to variables, so we can inject jobs
into the running generator with send().
Additionally, we create a helper function which will run all tasks
inside a job and ask us for a new one upon completion.
def gen_coroutine():
    """Generator coroutine yielding from sub-generator inside."""
    # do stuff ... first time when 'next' gets called
    print("receive job, build generator & yield, loop")
    while True:
        try:
            func, seq = yield "send me work ... or I quit with next next()"
        except TypeError:
            return "no job left"
        else:
            yield from (func(*args) for args in seq)
def do_job(gen, job):
    """Run all tasks in job."""
    print(gen.send(job))
    while True:
        result = next(gen)
        print(result)
        if result == "send me work ... or I quit with next next()":
            break
Now we run gen_coroutine with our helper function do_job and two jobs.
gen_co = gen_coroutine()
next(gen_co) # start
# receive job, build generator & yield, loop <-- printed with first next()
# Out: 'send me work ... or I quit with next next()'
do_job(gen_co, job1) # prints out all results from job
# 1.0
# 1.4142135623730951
# 1.7320508075688772
# 2.0
# send me work... or I quit with next next()
do_job(gen_co, job2) # send another job into generator
# 3
# 4
# 5
# 6
# send me work... or I quit with next next()
next(gen_co)
# Traceback ...
# StopIteration: no job left
To come back to your question of which version is the better approach in general:
IMO, something like gen_factory only makes sense if you need the same thing done for multiple generators you are going to create, or in cases where your construction process for generators is complicated enough to justify using a factory instead of building individual generators in place with a generator comprehension.
Note:
The description above for the gen_generator function (second mymap) states
"it is a generator itself". That is a bit vague and technically not
really correct, but it facilitates reasoning about the differences between the functions
in this tricky setup, where gen_factory also returns a generator, namely the
one built by the generator comprehension inside.
In fact any function (not only those from this question with generator comprehensions inside!) with a yield inside, upon invocation, just
returns a generator object which gets constructed out of the function body.
type(gen_coroutine) # function
gen_co = gen_coroutine(); type(gen_co) # generator
So the whole action we observed above for gen_generator and gen_coroutine
takes place within these generator objects, which functions with yield inside have spit out beforehand.
The answer is: return a generator. It's faster:
marco@buzz:~$ python3.9 -m pyperf timeit --rigorous --affinity 3 --value 6 --loops=4096 -s '
a = range(1000)

def f1():
    for x in a:
        yield x

def f2():
    return f1()
' 'tuple(f2())'
........................................
Mean +- std dev: 72.8 us +- 5.8 us
marco@buzz:~$ python3.9 -m pyperf timeit --rigorous --affinity 3 --value 6 --loops=4096 -s '
a = range(1000)

def f1():
    for x in a:
        yield x

def f2():
    yield from f1()
' 'tuple(f2())'
........................................
WARNING: the benchmark result may be unstable
* the standard deviation (12.6 us) is 10% of the mean (121 us)
Try to rerun the benchmark with more runs, values and/or loops.
Run 'python3.9 -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.
Mean +- std dev: 121 us +- 13 us
If you read PEP 380, the main reason for the introduction of yield from is to use a part of the code of a generator for another generator, without having to duplicate the code or change the API:
The rationale behind most of the semantics presented above stems from
the desire to be able to refactor generator code. It should be
possible to take a section of code containing one or more yield
expressions, move it into a separate function (using the usual
techniques to deal with references to variables in the surrounding
scope, etc.), and call the new function using a yield from expression.
Source
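A minimal sketch of the refactoring the PEP describes, under invented names: a run of yield statements is moved into a helper generator and re-attached with yield from, so the caller sees the same stream of values.

def read_header(lines):
    # extracted section: yields lines until the first blank one
    for line in lines:
        if not line.strip():
            return
        yield line

def read_document(lines):
    lines = iter(lines)
    yield from read_header(lines)  # delegated; send()/throw() also pass through
    yield '--- body starts here ---'

print(list(read_document(['title', 'author', '', 'text'])))
# ['title', 'author', '--- body starts here ---']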
The most important difference (I don't know if yield from generator is optimized) is that the traceback context is different for return and yield from.
[ins] In [1]: def generator():
         ...:     yield 1
         ...:     raise Exception
         ...:
[ins] In [2]: def use_generator():
         ...:     return generator()
         ...:
[ins] In [3]: def yield_generator():
         ...:     yield from generator()
         ...:
[ins] In [4]: g = use_generator()
[ins] In [5]: next(g); next(g)
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-5-3d9500a8db9f> in <module>
----> 1 next(g); next(g)
<ipython-input-1-b4cc4538f589> in generator()
1 def generator():
2 yield 1
----> 3 raise Exception
4
Exception:
[ins] In [6]: g = yield_generator()
[ins] In [7]: next(g); next(g)
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-7-3d9500a8db9f> in <module>
----> 1 next(g); next(g)
<ipython-input-3-3ab40ecc32f5> in yield_generator()
1 def yield_generator():
----> 2 yield from generator()
3
<ipython-input-1-b4cc4538f589> in generator()
1 def generator():
2 yield 1
----> 3 raise Exception
4
Exception:
I prefer the version with yield from because it makes it easier to handle exceptions and context managers.
Take the example of a generator expression for the lines of a file:
def with_return(some_file):
    with open(some_file, 'rt') as f:
        return (line.strip() for line in f)

for line in with_return('/tmp/some_file.txt'):
    print(line)
The return version raises ValueError: I/O operation on closed file., since the with block closes the file as soon as the function returns, before the returned generator is ever consumed.
On the other hand, the yield from version works as expected:
def with_yield_from(some_file):
    with open(some_file, 'rt') as f:
        yield from (line.strip() for line in f)

for line in with_yield_from('/tmp/some_file.txt'):
    print(line)
Generators use yield; functions use return.
Generators are generally used in for loops to iterate repeatedly over the values they automatically provide, but they may also be used in other contexts, e.g. in the list() function to create a list, again from the values the generator automatically provides.
Functions are called to provide a return value: only one value for every call.
Really it depends on the situation. yield is mainly suited to cases where you just want to iterate over the returned values and then manipulate them. return is mainly suited for when you want to store all of the values that your function has generated in memory rather than just iterate over them once. Do note that you can only iterate over a generator (what yield returns) once; there are some algorithms for which this is definitely not suited.
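A quick demonstration of that single-pass behavior:

gen = (x * x for x in range(3))
print(list(gen))  # [0, 1, 4]
print(list(gen))  # [] - the generator is exhausted after one pass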

How do Python generator functions maintain local state?

According to the docs at https://docs.python.org/2/reference/simple_stmts.html#yield,
all local state is retained, including the current bindings of local variables, the instruction pointer, and the internal evaluation stack: enough information is saved so that the next time next() is invoked, the function can proceed exactly as if the yield statement were just another external call.
Here's a simple case:
def generator():
    my_list = range(10)
    print "my_list got assigned"
    for i in my_list:
        print i
        yield
    return
In the shell, generator() behaves like this:
>>> generator().next()
my_list got assigned
0
>>> generator().next()
my_list got assigned
0
I would have thought that my_list would not get reassigned each time .next() is called. Can someone explain why this happens, and why it seems like the docs contradict this?
You are creating a new generator object each time. Create one instance:
g = generator()
g.next()
g.next()
Here g references the generator object that maintains the state:
>>> def generator():
...     my_list = range(10)
...     print "my_list got assigned"
...     for i in my_list:
...         print i
...         yield
...     return
...
>>> g = generator()
>>> g
<generator object generator at 0x100633f50>
>>> g.next()
my_list got assigned
0
>>> g.next()
1
Yes, the generator maintains state, as you correctly found in the documentation. The problem is that in your example, you're creating two generators. The first one is not assigned to any variable, so it's discarded immediately after .next() completes. Then you create the second generator, with its own local state, that starts at the beginning.
Try this:
>>> mygen = generator()
>>> mygen.next()
my_list got assigned
0
>>> mygen.next()
1
You're creating a new instance of generator when you call generator().
If you instead did
my_generator = generator()
my_generator.next() # my_list got assigned, 0
my_generator.next() # 1
the list would only be assigned once.

Attempting to understand yield as an expression

I'm playing around with generators and generator expressions and I'm not completely sure that I understand how they work (some reference material):
>>> a = (x for x in range(10))
>>> next(a)
0
>>> next(a)
1
>>> a.send(-1)
2
>>> next(a)
3
So it looks like generator.send was ignored. That makes sense (I guess) because there is no explicit yield expression to catch the sent information ...
However,
>>> a = ((yield x) for x in range(10))
>>> next(a)
0
>>> print next(a)
None
>>> print next(a)
1
>>> print next(a)
None
>>> a.send(-1) #this send is ignored, Why? ... there's a yield to catch it...
2
>>> print next(a)
None
>>> print next(a)
3
>>> a.send(-1) #this send isn't ignored
-1
I understand this is pretty far out there, and I (currently) can't think of a use-case for this (so don't ask;)
I'm mostly just exploring to try to figure out how these various generator methods work (and how generator expressions work in general). Why does my second example alternate between yielding a sensible value and None? Also, can anyone explain why one of my generator.send calls was ignored while the other wasn't?
The confusion here is that the generator expression is doing a hidden yield. Here it is in function form:
def foo():
    for x in range(10):
        yield (yield x)
When you do a .send(), what happens is the inner yield x gets executed, which yields x. Then the expression evaluates to the value of the .send, and the next yield yields that. Here it is in clearer form:
def foo():
    for x in range(10):
        sent_value = (yield x)
        yield sent_value
Thus the output is very predictable:
>>> a = foo()
#start it off
>>> a.next()
0
#execution has now paused at "sent_value = ?"
#now we fill in the "?". whatever we send here will be immediately yielded.
>>> a.send("yieldnow")
'yieldnow'
#execution is now paused at the 'yield sent_value' expression
#as this is not assigned to anything, whatever is sent now will be lost
>>> a.send("this is lost")
1
#now we're back where we were at the 'yieldnow' point of the code
>>> a.send("yieldnow")
'yieldnow'
#etc, the loop continues
>>> a.send("this is lost")
2
>>> a.send("yieldnow")
'yieldnow'
>>> a.send("this is lost")
3
>>> a.send("yieldnow")
'yieldnow'
EDIT: Example usage. By far the coolest one I've seen so far is Twisted's inlineCallbacks function. See here for an article explaining it. The nub of it is that it lets you yield functions to be run in threads, and once the functions are done, Twisted sends the result of the function back into your code. Thus you can write code that heavily relies on threads in a very linear and intuitive manner, instead of having to write tons of little functions all over the place.
See PEP 342 for more info on the rationale for having .send work, with potential use cases (the Twisted example I provided is an example of the boon this change offered to asynchronous I/O).
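A toy sketch of that driving pattern (not Twisted's actual API; everything here is synchronous and invented for illustration): the coroutine yields units of work, and a driver loop sends each result back in.

def task():
    # yields callables; the driver runs each one and sends the result back
    a = yield (lambda: 2 + 3)
    b = yield (lambda: a * 10)
    print('result:', b)

def run(gen):
    # minimal trampoline driving the coroutine to completion
    try:
        work = next(gen)
        while True:
            work = gen.send(work())
    except StopIteration:
        pass

run(task())  # prints: result: 50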
You're confusing yourself a bit because you actually are generating from two sources: the generator expression (... for x in range(10)) is one generator, but you create another source with the yield. You can see that if you do list(a) you'll get [0, None, 1, None, 2, None, 3, None, 4, None, 5, None, 6, None, 7, None, 8, None, 9, None].
Your code is equivalent to this:
>>> def gen():
...     for x in range(10):
...         yield (yield x)
Only the inner yield ("yield x") is "used" in the generator --- it is used as the value of the outer yield. So this generator iterates back and forth between yielding values of the range, and yielding whatever is "sent" to those yields. If you send something to the inner yield, you get it back, but if you happen to send on an even-numbered iteration, the send is sent to the outer yield and is ignored.
This generator translates into:
for i in xrange(10):
    x = (yield i)
    yield x
The result of the second call to send()/next() is ignored, because you do nothing with the result of one of the yields.
The generator you wrote is equivalent to the more verbose:
def testing():
    for x in range(10):
        x = (yield x)
        yield x
As you can see here, the second yield, which is implicit in the generator expression, does not save the value you pass it; therefore, depending on where generator execution is paused, the send may or may not appear to work.
Indeed, the send method is meant to work with a generator object that is the result of a coroutine you have explicitly written. It is difficult to give it any meaning in a generator expression, though it works.
-- EDIT --
I had previously written the following, but it is incorrect, as the behavior of yield inside generator expressions is predictable across implementations, though not mentioned in any PEP.

generator expressions are not meant to have the yield keyword - I am
not sure the behavior is even defined in this case. We could think a
little and work out what is happening in your expression, to see where
those "None"s are coming from. However, assume that it is a side
effect of how yield is implemented in Python (and probably
implementation dependent), not something that should be relied upon.
The correct form for a generator expression, in a simplified manner is:
(<expr> for <variable> in <sequence> [if <expr>])
so <expr> is evaluated for each value in the <sequence>: not only is yield unneeded, you should not use it.
Both yield and the send methods are meant to be used in full co-routines, something like:
def doubler():
    value = 0
    while value < 100:
        value = 2 * (yield value)
And you can use it like:
>>> a = doubler()
>>> # next() has to be called once, so the code will run up to the first 'yield'
...
>>> a.next()
0
>>> a.send(10)
20
>>> a.send(20)
40
>>> a.send(23)
46
>>> a.send(51)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>>

What is a "yield" statement in a function? [duplicate]

Possible Duplicate:
The Python yield keyword explained
Can someone explain to me what the yield statement actually does in this bit of code here:
def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a+b

for number in fibonacci():  # use the generator as an iterator
    print number
What I understand so far is: we are defining a function fibonacci(), with no parameters?
Inside the function we are setting a and b equal to 0 and 1; next, while this is true, we are yielding a. What is this actually doing? Furthermore, after yielding a, a is set equal to b, while b is set equal to a + b.
Next question, for number in fibonacci(), does this mean for every number in the function or what? I'm equally stumped on what yield and 'for number' are actually doing. Obviously I am aware that it means for every number in fibonacci() print number. Am I actually defining number without knowing it?
Thanks, sorry if I'm not clear. BTW, it's for project Euler, if I knew how to program well this would be a breeze but I'm trying to learn this on the fly.
Using yield makes the function a generator.
The generator will continue to yield the a variable on each loop, waiting until the generator's next() method is called to continue on to the next loop iteration.
Or, until you return or StopIteration is raised.
Slightly modified to show use of StopIteration:
>>> def fib():
...     a = 0
...     b = 1
...     while True:
...         yield a
...         a = b
...         b += a
...         if a > 100:
...             raise StopIteration
...
>>>
>>> for value in fib():
...     print value
...
0
1
2
4
8
16
32
64
>>>
>>> # assign the resulting object to 'generator'
>>> generator = fib()
>>> generator.next()
0
>>> generator.next()
1
>>> for value in generator:
...     print value
...
2
4
8
16
32
64
>>>
Generators have the special property of being iterables which do not consume memory for their values.
They do this by calculating each new value when it is required, while being iterated.
i.e.
def f():
    a = 2
    yield a
    a += 1

for ele in f():
    print ele
would print
2
So you are using a function as an iterable that keeps returning values.
This is especially useful when the values would otherwise use a lot of memory, so you cannot afford a list comprehension
i.e.
li = [ele*10 for ele in range(10)]
takes 10 memory slots for ints as a list,
but if you simply want to iterate over it, not access its elements individually,
it would be much more memory efficient to instead use
def f():
    i = 0
    while i < 10:
        yield i * 10
        i += 1
which would use one memory slot, as i keeps being reused.
A shortcut for this is:
ge = (i*10 for i in range(10))
you can do any of the following
for ele in f():
for ele in li:
for ele in ge:
to obtain equivalent results
When the code calls fibonacci, a special generator object is created. Please note that no code gets executed at that point; only a generator object is returned. When you later call its next method, the function executes until it encounters a yield statement, and the object supplied to yield is returned. When you call the next method again, the function executes again until it encounters a yield. When no more yield statements are encountered and the end of the function is reached, a StopIteration exception is raised.
Please note that the objects inside the function are preserved between the calls to next. It means that when the code continues execution on the next loop, all the objects in the scope from which yield was called keep the values they had when the previous next call returned.
The cool thing about generators is that they allow convenient iteration with for loops.
The for loop obtains a generator from the result of the fibonacci call and then executes the loop, retrieving elements using the next method of the generator object until a StopIteration exception is encountered.
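Roughly, the for loop does something like this by hand (a sketch of the iteration protocol, not CPython's literal implementation; note that this fibonacci never raises StopIteration, so the loop runs forever, just like the original for loop):

gen = fibonacci()
while True:
    try:
        number = next(gen)
    except StopIteration:
        break  # never reached for this infinite generator
    print number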
This answer is a great explanation of the yield statement, and also of iterators and generators.
Specifically here, the first call to fibonacci() will initialize a to 0 and b to 1, enter the while loop, and return a.
Any subsequent next call will resume after the yield statement, assign b to a and a+b to b, then go to the next iteration of the while loop, reach the yield statement again, and return a again.
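Stepping through it by hand shows that resumption (using the fibonacci from the question):

gen = fibonacci()
gen.next()  # 0 - runs up to the first yield
gen.next()  # 1 - resumes after yield: a, b = 1, 1
gen.next()  # 1
gen.next()  # 2 - and the sequence continues: 3, 5, 8, ...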

what does yield as assignment do? myVar = (yield)

I'm familiar with yield to return a value thanks mostly to this question
but what does yield do when it is on the right side of an assignment?
#coroutine
def protocol(target=None):
    while True:
        c = (yield)

def coroutine(func):
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        cr.next()
        return cr
    return start
I came across this, on the code samples of this blog, while researching state machines and coroutines.
The yield statement used in a function turns that function into a "generator" (a function that creates an iterator). The resulting iterator is normally resumed by calling next(). However, it is possible to send values into the generator by calling the method send() instead of next() to resume it:
cr.send(1)
In your example this would assign the value 1 to c each time.
cr.next() is effectively equivalent to cr.send(None)
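A small sketch making that equivalence visible (hypothetical names, Python 3 spelling):

def echo():
    while True:
        value = (yield)
        print('got:', value)

e = echo()
next(e)    # advance to the first yield
e.send(5)  # prints: got: 5
next(e)    # prints: got: None - next() behaves like send(None)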
You can send values to the generator using the send function.
If you execute:
p = protocol()
p.next() # advance to the yield statement, otherwise I can't call send
p.send(5)
then yield will return 5, so inside the generator c will be 5.
Also, if you call p.next(), yield will return None.
You can find more information here.
yield returns a stream of data as per the logic defined within the generator function.
However, send(val) is a way to pass a desired value from outside the generator function.
p.next() doesn't work in Python 3; the built-in next(p) works in both Python 2 and 3.
p.next() doesn't work in Python 3 and gives the following error, although it still works in Python 2:
AttributeError: 'generator' object has no attribute 'next'
Here's a demonstration:
def fun(li):
    if len(li):
        val = yield len(li)
        print(val)
        yield None
g = fun([1, 2, 3, 4, 5, 6])
next(g)    # runs to the first yield and returns len(li), i.e. 6
g.send(8)  # 8 is assigned to val, 8 is printed, then yields None
