Using generator send() within a for loop - python

I implemented graph traversal as a generator function which yields the node being visited.
Sometimes the user needs to tell the traversal function that the edges outgoing from a particular node shouldn't be followed; in order to support that, the traversal checks the value sent back to it (using generator send() method), and if it's True, regards the node as a leaf for traversal purposes.
The problem is that the simplest user loop is kinda long:
# simplified thanks to @tobias_k
# bfs is the traversal generator function
traversal = bfs(g, start_node)
try:
    n = next(traversal)
    while True:
        # process(n) returns True if don't want to follow edges out of n
        n = traversal.send(process(n))
except StopIteration:
    pass
Is there any way to improve this?
I thought something like this should work:
for n in bfs(g, start_node):
    ???.send(process(n))
but I feel I'm missing the knowledge of some python syntax.

I don't see a way to do this in a regular for loop. However, you could create another generator, that iterates another generator, using some "follow-function" to determine whether to follow the current element, thus encapsulating the tricky parts of your code into a separate function.
def checking_generator(generator, follow_function):
    try:
        x = next(generator)
        while True:
            yield x
            x = generator.send(follow_function(x))
    except StopIteration:
        pass

for n in checking_generator(bfs(g, start_node), process):
    print(n)

I discovered that my question would have had a one-line answer, using the extended "continue" statement proposed in the earlier version of PEP 342:
for n in bfs(g, start_node):
    continue process(n)
However, while PEP 342 was accepted, that particular feature was withdrawn after this June 2005 discussion between Raymond and Guido:
Raymond Hettinger said:
Let me go on record as a strong -1 for "continue EXPR". The
for-loop is our most basic construct and is easily understood in its
present form. The same can be said for "continue" and "break" which
have the added advantage of a near zero learning curve for people
migrating from other languages.
Any urge to complicate these basic statements should be seriously
scrutinized and held to high standards of clarity, explainability,
obviousness, usefulness, and necessity. IMO, it fails most of those
tests.
I would not look forward to explaining "continue EXPR" in the tutorial
and think it would stand out as an anti-feature.
Guido van Rossum replied:
[...] The correct argument against "continue EXPR" is that there
are no use cases yet; if there were a good use case, the explanation
would follow easily.
If python core developers have since changed their mind about the usefulness of extended "continue", perhaps this could be reintroduced into a future PEP. But, given a nearly identical use case as in this question was already discussed in the quoted thread, and wasn't found persuasive, it seems unlikely.

To simplify the client code, you could use an ordinary bfs() generator and check the node.isleaf attribute in it:

for node in bfs(g, start_node):
    node.isleaf = process(node)  # don't follow if `process()` returns True

The disadvantage is that node must be mutable. Alternatively, you could pass a shared data structure that tracks leaf nodes: leaf[node] = process(node), where the leaf dictionary was passed into bfs() earlier.
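A minimal sketch of that shared-dictionary variant, with an illustrative adjacency-dict bfs(); the graph representation and helper names here are my assumptions, not from the answer:

```python
from collections import deque

def bfs(g, start_node, leaf):
    # g: adjacency dict {node: [neighbors]}; leaf: dict the caller fills in
    # while iterating, marking nodes whose outgoing edges must not be followed
    seen = {start_node}
    queue = deque([start_node])
    while queue:
        node = queue.popleft()
        yield node
        if leaf.get(node):  # set by the caller before the next iteration step
            continue
        for neighbor in g.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['e']}
leaf = {}
visited = []
for node in bfs(g, 'a', leaf):
    visited.append(node)
    leaf[node] = (node == 'b')  # treat 'b' as a leaf, so 'd' is never reached

print(visited)  # ['a', 'b', 'c', 'e']
```

This works because the caller assigns leaf[node] before advancing the loop, so the generator sees the flag when it resumes.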
If you want to use the .send() method explicitly, you have to handle StopIteration. See PEP 479 -- Change StopIteration handling inside generators. You could hide it in a helper function:
def traverse(tree_generator, visitor):
    try:
        node = next(tree_generator)
        while True:
            node = tree_generator.send(visitor(node))
    except StopIteration:
        pass
Example:
traverse(bfs(g, start_node), process)

I don't see this as a frequent use case. Consider this as the original generator:
def original_gen():
    for x in range(10):
        should_break = yield x
        if should_break:
            break
If the value of should_break is always calculated based on some function call with x then why not just write the generator like this:
def processing_gen(check_f):
    for x in range(10):
        yield x
        should_break = check_f(x)
        if should_break:
            break
However, I usually think of the code that processes the generated values as being written inside the loop (otherwise, what is the point of having a loop at all?).
What it really seems you want is a generator where calling the __next__ method implies send(process(LAST_VALUE)), which can be implemented with a class:
class Followup_generator():  # feel free to use a better name
    def __init__(self, generator, following_function):
        self.gen = generator
        self.process_f = following_function
    def __iter__(self):
        return self
    def __next__(self):
        if hasattr(self, "last_value"):
            return self.send(self.process_f(self.last_value))
        else:
            self.last_value = next(self.gen)
            return self.last_value
    def send(self, arg):
        self.last_value = self.gen.send(arg)
        return self.last_value
    def __getattr__(self, attr):
        "forward other lookups to the generator (.throw etc.)"
        return getattr(self.gen, attr)

# call signature is the exact same as @tobias_k's checking_generator
traversal = Followup_generator(bfs(g, start_node), process)
for n in traversal:
    print(n)
    n = traversal.send(DATA)  # you'd be able to send extra values to it
However, I still don't see this as a frequent use case; I'd be perfectly fine with a while loop, although I'd put the .send call at the top:
traversal = bfs(g, start_node)
send_value = None
while True:
    n = traversal.send(send_value)
    # code for loop, ending in calculating the next send_value
    send_value = process(n)
And you might wrap that in try: ... except StopIteration: pass, although I find that simply waiting for an error to be raised is better expressed with a context manager:
class Catch:
    def __init__(self, exc_type):
        if issubclass(exc_type, BaseException):
            self.catch_type = exc_type
        else:
            raise TypeError("can only catch Exceptions")
    def __enter__(self):
        return self
    def __exit__(self, exc_type, err, tb):
        # exc_type is None when the block exits normally
        if exc_type is not None and issubclass(exc_type, self.catch_type):
            self.err = err
            return True

with Catch(StopIteration):
    traversal = bfs(g, start_node)
    send_value = None
    while True:
        n = traversal.send(send_value)
        # code for loop, ending in calculating the next send_value
        send_value = process(n)
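For what it's worth, the standard library already provides this pattern: contextlib.suppress (Python 3.4+) swallows the named exception type, so the Catch class above can be replaced by it. A small self-contained demonstration with a trivial generator:

```python
from contextlib import suppress

def gen():
    # trivial generator that ignores what is sent back
    for i in range(3):
        _ = yield i

g = gen()
results = []
with suppress(StopIteration):  # stdlib equivalent of the Catch class
    send_value = None
    while True:
        n = g.send(send_value)
        results.append(n)
        send_value = n

print(results)  # [0, 1, 2]
```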

This may be the answer to the question in the thread's title.
Take a look at the additional bare yield statements inside the traversal function, and at the custom send function, which does the magical job.
# tested with Python 3.7
def traversal(n):
    for i in range(n):
        yield i, '%s[%s] %s' % (' ' * (4 - n), n, i)
        stop = yield
        if stop:
            yield  # here's the first part of the magic
        else:
            yield  # the same as above
            yield from traversal(int(n / 2))

def send(generator, value):
    next(generator)  # here's the second part of the magic
    generator.send(value)

g = traversal(4)
for i, (num, msg) in enumerate(g):
    print('>', i, msg)
    stop = num % 2 == 0
    send(g, stop)

I've written a small class SettableGenerator which uses a method to receive the value to be sent and then forwards it to the actual generator when __next__ is invoked.
With this you can write:
gen = SettableGenerator(bfs(g, start_node))
for n in gen:
    gen.set(process(n))
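The answer doesn't show the class itself; here is one way such a SettableGenerator could plausibly be written (the name and the set() method come from the answer, the body is my reconstruction):

```python
class SettableGenerator:
    """Wraps a generator; set() stores a value that is passed to the
    wrapped generator's send() on the next iteration step."""
    def __init__(self, gen):
        self._gen = gen
        self._to_send = None
    def set(self, value):
        self._to_send = value
    def __iter__(self):
        return self
    def __next__(self):
        to_send, self._to_send = self._to_send, None
        return self._gen.send(to_send)  # send(None) behaves like next()

def numbers():
    # yields 0, 1, 2, ... until the caller sends back True
    n = 0
    stop = False
    while not stop:
        stop = yield n
        n += 1

gen = SettableGenerator(numbers())
seen = []
for n in gen:
    seen.append(n)
    gen.set(n == 2)  # stop the iteration after 2

print(seen)  # [0, 1, 2]
```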

Let's consider the following generator. It generates numbers from 0 to 9. For every generated number, it gets an input and stores it into ret:
def count_to_nine():
    # Output: numbers from 0 to 9
    # Input: converted numbers
    ret = []
    for i in range(10):
        # Yield a number, get something back
        val = (yield i)
        # Remember that "something"
        ret.append(val)
    return ret
You can, indeed, iterate it using next() + send(),
but the best way is to iterate using send() alone:
g = count_to_nine()
value = None  # to make sure that the first send() gives a None
while True:
    value = g.send(value)  # send the previously generated value, get a new one
    value = f'#{value}'
Here's the result:
StopIteration: ['#0', '#1', '#2', '#3', '#4', '#5', '#6', '#7', '#8', '#9']
If you want that output, catch the StopIteration and get the result from it.
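Catching that StopIteration to recover the list might look like this (a sketch restating the loop above):

```python
def count_to_nine():
    ret = []
    for i in range(10):
        val = (yield i)
        ret.append(val)
    return ret

g = count_to_nine()
value = None
try:
    while True:
        value = g.send(value)
        value = f'#{value}'
except StopIteration as e:
    result = e.value  # the return value travels on the exception

print(result)  # ['#0', '#1', '#2', '#3', '#4', '#5', '#6', '#7', '#8', '#9']
```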
Cheers!

Mixing yield and return. `yield [cand]; return` vs `return [[cand]]`. Why do they lead to different output? [duplicate]

Why does

yield [cand]
return

lead to different output/behavior than

return [[cand]]
Minimal viable example
uses recursion
the output of the version using yield [1]; return is different than the output of the version using return [[1]]
def foo(i):
    if i != 1:
        yield [1]
        return
    yield from foo(i-1)

def bar(i):
    if i != 1:
        return [[1]]
    yield from bar(i-1)

print(list(foo(1)))  # [[1]]
print(list(bar(1)))  # []
Minimal viable counter-example
does not use recursion
the output of the version using yield [1]; return is the same as the output of the version using return [[1]]
def foo():
    yield [1]
    return

def foofoo():
    yield from foo()

def bar():
    return [[1]]

def barbar():
    yield from bar()

print(list(foofoo()))  # [[1]]
print(list(barbar()))  # [[1]]
Full context
I'm solving Leetcode #39: Combination Sum and was wondering why one solution works, but not the other:
Working solution
from functools import cache  # requires Python 3.9+

class Solution:
    def combinationSum(self, candidates: List[int], target: int) -> List[List[int]]:
        @cache
        def helper(targ, i=0):
            if i == N or targ < (cand := candidates[i]):
                return
            if targ == cand:
                yield [cand]
                return
            for comb in helper(targ - cand, i):
                yield comb + [cand]
            yield from helper(targ, i+1)
        N = len(candidates)
        candidates.sort()
        yield from helper(target)
Non-working solution
from functools import cache  # requires Python 3.9+

class Solution:
    def combinationSum(self, candidates: List[int], target: int) -> List[List[int]]:
        @cache
        def helper(targ, i=0):
            if i == N or targ < (cand := candidates[i]):
                return
            if targ == cand:
                return [[cand]]
            for comb in helper(targ - cand, i):
                yield comb + [cand]
            yield from helper(targ, i+1)
        N = len(candidates)
        candidates.sort()
        yield from helper(target)
Output
On the following input
candidates = [2,3,6,7]
target = 7
print(Solution().combinationSum(candidates, target))
the working solution correctly prints
[[3,2,2],[7]]
while the non-working solution prints
[]
I'm wondering why yield [cand]; return works, but return [[cand]] doesn't.
In a generator function, return just defines the value associated with the StopIteration exception implicitly raised to indicate an iterator is exhausted. It's not produced during iteration, and most iterating constructs (e.g. for loops) intentionally ignore the StopIteration exception (it means the loop is over, you don't care if someone attached random garbage to a message that just means "we're done").
For example, try:
>>> def foo():
... yield 'onlyvalue' # Existence of yield keyword makes this a generator
... return 'returnvalue'
...
>>> f = foo() # Makes a generator object, stores it in f
>>> next(f) # Pull one value from generator
'onlyvalue'
>>> next(f) # There is no other yielded value, so this hits the return; iteration over
--------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
...
StopIteration: 'returnvalue'
As you can see, your return value does get "returned" in a sense (it's not completely discarded), but it's never seen by anything iterating normally, so it's largely useless. Outside of rare cases involving using generators as coroutines (where you're using .send() and .throw() on instances of the generator and manually advancing it with next(genobj)), the return value of a generator won't be seen.
In short, you have to pick one:
Use yield anywhere in a function, and it's a generator (whether or not the code path of a particular call ever reaches a yield) and return just ends generation (while maybe hiding some data in the StopIteration exception). No matter what you do, calling the generator function "returns" a new generator object (which you can loop over until exhausted), it can never return a raw value computed inside the generator function (which doesn't even begin running until you loop over it at least once).
Don't use yield, and return works as expected (because it's not a generator function).
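If you ever do want the return value during normal-looking iteration, yield from (PEP 380) is the supported way to capture it; a small sketch using a foo() like the one in the interpreter session above:

```python
def foo():
    yield 'onlyvalue'
    return 'returnvalue'

def driver():
    ret = yield from foo()  # yields foo's values, then binds its return value
    yield ('captured', ret)

print(list(driver()))  # ['onlyvalue', ('captured', 'returnvalue')]
```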
As an example to explain what happens to the return value in normal looping constructs, this is what for x in gen(): effectively expands to a C optimized version of:
__unnamed_iterator = iter(gen())
while True:
try:
x = next(__unnamed_iterator)
except StopIteration: # StopIteration caught here without inspecting it
break # Loop ends, StopIteration exception cleaned even from sys.exc_info() to avoid possible reference cycles
# body of loop goes here
# Outside of loop, there is no StopIteration object left
As you can see, the expanded form of the for loop has to look for a StopIteration to indicate the loop is over, but it doesn't use it. And for anything that's not a generator, the StopIteration never has any associated values; the for loop has no way to report them even if it did (it has to end the loop when it's told iteration is over, and the arguments to StopIteration are explicitly not part of the values iterated anyway).

Anything else that consumes the generator (e.g. calling list on it) does roughly the same thing as the for loop, ignoring the StopIteration in the same way; nothing except code that specifically expects generators (as opposed to more generalized iterables and iterators) will ever bother to inspect the StopIteration object.

(At the C layer, there are optimizations such that StopIteration objects aren't even produced by most iterators; they return NULL and leave the set exception empty, which everything using the iterator protocol knows is equivalent to returning NULL and setting a StopIteration object, so for anything but a generator, there isn't even an exception to inspect much of the time.)

"Double" iterator and a generator function

I want to get the "next asset" from an iterator-like object, but (instead of a single __next__() method) there are two algorithms for loading the next asset (next1 and next2 below), which can be implemented as a "quasi-iterator" like:
class AssetLoader(object):
    def __init__(self):
        pass
    def next1(self):
        ...  # first algorithm for loading the next asset
    def next2(self):
        ...  # second algorithm for loading the next asset
To be clear, which object is retrieved next may depend on the "history" of calls to next1 and next2, like:
next1(); next1(); next2(); next1(); next2()
My question: can this (two kinds of "next" step in an iterator) be implemented as a generator function?
I guess this can be done with a global variable to which the function refers. But can it be done without using global variables, but with some local variable?
If it is hard or impossible with current Python, can we discuss how to add new semantics to Python to make it possible?
Here's a simple example of using send to switch a generator between two different iteration modes: it either increments its current value or multiplies it. The same principle can be applied to your graph traversal task.
The send method allows you to send an object into the generator. Annoyingly, the result of send is the current value that you would have obtained by calling next; it would be nice if you could send without having the generator yield a value, but that's just something we have to live with.
def add_or_mul(current, step, scale, mode='add'):
    ''' A generator that either adds step to the current value,
        or multiplies it by scale
    '''
    while True:
        newmode = yield current
        if newmode is not None:
            if newmode not in ('add', 'mul'):
                raise ValueError('Bad mode: ' + newmode)
            mode = newmode
        if mode == 'add':
            current += step
        else:
            current *= scale

# Test
gen = add_or_mul(1, 1, 2)
for i in range(5):
    print(next(gen))
print(gen.send('mul'))
for i in range(4):
    print(next(gen))
print(gen.send('add'))
for i in range(4):
    print(next(gen))
output
1
2
3
4
5
10
20
40
80
160
161
162
163
164
165
If you have trouble applying this technique to your graph traversal task please ask a fresh question (possibly linking to this one) that includes some relevant graph code, so that answerers don't have to write that stuff from scratch in order to test and demonstrate their code.
You can try this:
class AssetLoader(object):
    def __init__(self):
        self.current_next = self.next1
    def next1(self):
        if condition:
            self.current_next = self.next2
        elif condition:
            return x
        else:
            raise StopIteration
    def next2(self):
        if condition:
            self.current_next = self.next1
        elif condition:
            return y
        else:
            raise StopIteration
    def __next__(self):
        return self.current_next()
    def __iter__(self):
        return self
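To connect this back to the original question: the two "next" algorithms can also live in a single generator function, with the choice encoded in the value passed to send(). A sketch with a plain list standing in for the asset source (all names here are illustrative, not from the question):

```python
def asset_loader(assets):
    # One generator, two "next" algorithms:
    # mode 1 advances to the adjacent asset, mode 2 skips one.
    # The call history lives in the local variables `i` and `mode`.
    i, mode = 0, 1
    while i < len(assets):
        received = yield assets[i]
        if received in (1, 2):  # caller may switch algorithms via send()
            mode = received
        i += mode

loader = asset_loader(['a', 'b', 'c', 'd', 'e'])
first = next(loader)     # plain next() keeps the current mode
third = loader.send(2)   # switch to the skipping algorithm
fifth = next(loader)
```

The state the question worried about (no globals needed) is simply held in the generator's local variables between resumptions.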

How can I write a Python decorator to increase stackdepth?

BACKGROUND
When playing around, I often write simple recursive functions looking something like:
def f(a,b):
    if a>=0 and b>=0:
        return min( f(a-1,b) , f(b,a-1) )  # + some cost that depends on a,b
    else:
        return 0
(For example, when computing weighted edit distances, or evaluating recursively defined mathematical formulas.)
I then use a memoizing decorator to cache the results automatically.
PROBLEM
When I try something like f(200,10) I get:
RuntimeError: maximum recursion depth exceeded
This is as expected because the recursive implementation exhausts Python's stack space/ recursion limits.
WORKAROUNDS
I usually work around this problem by one of:
Increasing recursion limit with sys.setrecursionlimit (only works up to about 1000 depth)
Using a for loop to fill up the cache for smaller values
Changing the function to use a list as a manual stack (via append and pop calls) (in other words, moving from a recursive implementation to an iterative one)
Using an alternative programming language
but I find all of these quite error prone.
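For reference, the third workaround (a manual stack) applied to the f() above might look like the sketch below; the cost term is assumed to be 1, since the original elides it:

```python
def f_iterative(a, b, cost=lambda a, b: 1):
    # Iterative evaluation of f(a, b) = min(f(a-1, b), f(b, a-1)) + cost(a, b),
    # with f(a, b) = 0 when a < 0 or b < 0, using an explicit stack plus a memo
    # dict instead of the call stack. `cost` is an assumed stand-in for the
    # elided "+ some cost that depends on a,b".
    memo = {}
    stack = [(a, b)]
    while stack:
        x, y = stack[-1]
        if (x, y) in memo:          # already computed via another path
            stack.pop()
            continue
        if x < 0 or y < 0:          # base case
            memo[(x, y)] = 0
            stack.pop()
            continue
        deps = [(x - 1, y), (y, x - 1)]
        missing = [d for d in deps if d not in memo]
        if missing:
            stack.extend(missing)   # compute dependencies first
        else:
            memo[(x, y)] = min(memo[d] for d in deps) + cost(x, y)
            stack.pop()
    return memo[(a, b)]

print(f_iterative(200, 10))  # no RecursionError, unlike the recursive form
```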
QUESTION
Is there a way to write an @Bigstack decorator that would simulate the effect of having a really big stack?
Note that my functions normally make several recursive function calls so this is not the same as tail recursion - I really do want to save all the internal state of each function on the stack.
WHAT I'VE TRIED
I've been thinking about using a list of generator expressions as my stack. By probing the stackframe I could work out when the function has been called recursively and then trigger an exception to return to the decorator code. However, I can't work out a way of gluing these ideas together to make anything that actually works.
Alternatively, I could try accessing the abstract syntax tree for the function and try transforming calls to recursive functions to yield statements, but this seems like it's heading in the wrong direction.
Any suggestions?
EDIT
It certainly looks like I am misusing Python, but another approach I have been considering is to use a different thread for each block of, say, 500 stack frames and then insert queues between each consecutive pair of threads - one queue for arguments, and another queue for return values. (Each queue will have at most one entry in it.) I think this probably doesn't work for some reason - but I'll probably only work out why after I've tried to implement it.
To get around the recursion limit, you can catch the RuntimeError exception to detect when you've run out of stack space, and then return a continuation-ish function that, when called, restarts the recursion at the level where you ran out of space. Call this (and its return value, and so on) until you get a value, then try again from the top. Once you've memoized the lower levels, the higher levels won't run into a recursion limit, so eventually this will work. Put the repeated-calling-until-it-works in a wrapper function. Basically it's a lazy version of your warming-up-the-cache idea.
Here's an example with a simple recursive "add numbers from 1 to n inclusive" function.
import functools

def memoize(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = args, tuple(sorted(kwargs.items()))
        if key in cache:
            return cache[key]
        else:
            result = func(*args, **kwargs)
            if not callable(result):
                cache[key] = result
            return result
    return wrapper

@memoize
def _addup(n):
    if n < 2:
        return n
    else:
        try:
            result = _addup(n - 1)
        except RuntimeError:  # RecursionError is a subclass of RuntimeError
            return lambda: _addup(n)
        else:
            return result if callable(result) else result + n

def addup(n):
    result = _addup(n)
    while callable(result):
        while callable(result):
            result = result()
        result = _addup(n)
    return result

assert addup(5000) == sum(range(5001))
Rather than returning the lambda function all the way back up the call chain, we can raise an exception to short-circuit that, which both improves performance and simplifies the code:
# memoize function as above, or you can probably use functools.lru_cache

class UnwindStack(Exception):
    pass

@memoize
def _addup(n):
    if n < 2:
        return n
    else:
        try:
            return _addup(n - 1) + n
        except RuntimeError:
            raise UnwindStack(lambda: _addup(n))

def _try(func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except UnwindStack as e:
        return e.args[0]

def addup(n):
    result = _try(_addup, n)
    while callable(result):
        while callable(result):
            result = _try(result)
        result = _try(_addup, n)
    return result
This remains pretty inelegant, though, and still has a fair amount of overhead, and I can't imagine how you'd make a decorator out of it. Python isn't really suited to this kind of thing, I guess.
Here's an implementation that uses a list of generator expressions as the stack:
def run_stackless(frame):
    stack, return_stack = [(False, frame)], []
    while stack:
        active, frame = stack.pop()
        action, res = frame.send(return_stack.pop() if active else None)
        if action == 'call':
            stack.extend([(True, frame), (False, res)])
        elif action == 'tail':
            stack.append((False, res))
        elif action == 'return':
            return_stack.append(res)
        else:
            raise ValueError('Unknown action', action)
    return return_stack.pop()
To use it you need to transform the recursive function according to the following rules:
return expr -> yield 'return', expr
recursive_call(args...) -> (yield 'call', recursive_call(args...))
return recursive_call(args...) -> yield 'tail', recursive_call(args...)
For example, with the cost function as a * b, your function becomes:
def f(a,b):
    if a>=0 and b>=0:
        yield 'return', min((yield 'call', f(a-1,b)),
                            (yield 'call', f(b,a-1))) + (a * b)
    else:
        yield 'return', 0
Testing:
In [140]: run_stackless(f(30, 4))
Out[140]: 410
In Python 2.6.2 it appears to offer a ~8-10x performance hit compared to direct calls.
The tail action is for tail recursion:
def factorial(n):
    acc = [1]
    def fact(n):
        if n == 0:
            yield 'return', 0
        else:
            acc[0] *= n
            yield 'tail', fact(n - 1)
    run_stackless(fact(n))
    return acc[0]
The transformation to generator-recursive style is fairly easy, and could probably be done as a bytecode hack.
This approach combines memoisation and increased stack depth into a single decorator.
I generate a pool of threads with each thread responsible for 64 levels of the stack.
Threads are only created once and reused (but currently never deleted).
Queues are used to pass information between threads, although note that only the thread corresponding to the current stack depth will actually have work to do.
My experiments suggest this adds around 10% overhead for a simple recursive function (and should be less for more complicated functions).
import threading, queue

class BigstackThread(threading.Thread):
    def __init__(self, send, recv, func):
        threading.Thread.__init__(self)
        self.daemon = True
        self.send = send
        self.recv = recv
        self.func = func
    def run(self):
        while True:
            args = self.send.get()
            v = self.func(*args)
            self.recv.put(v)

class Bigstack(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}
        self.depth = 0
        self.threadpool = {}
    def __call__(self, *args):
        if args in self.cache:
            return self.cache[args]
        self.depth += 1
        if self.depth & 63:
            v = self.func(*args)
        else:
            T = self.threadpool
            if self.depth not in T:
                send = queue.Queue(1)
                recv = queue.Queue(1)
                t = BigstackThread(send, recv, self)
                T[self.depth] = send, recv, t
                t.start()
            else:
                send, recv, _ = T[self.depth]
            send.put(args)
            v = recv.get()
        self.depth -= 1
        self.cache[args] = v
        return v

@Bigstack
def f(a, b):
    if a >= 0 and b >= 0:
        return min(f(a-1, b), f(b-1, a)) + 1
    return 0

Recursion using yield

Is there any way to mix recursion and the yield statement? For instance, an infinite number generator (using recursion) would be something like:
def infinity(start):
    yield start
    # recursion here ...
>>> it = infinity(1)
>>> next(it)
1
>>> next(it)
2
I tried:
def infinity(start):
    yield start
    infinity(start + 1)

and

def infinity(start):
    yield start
    yield infinity(start + 1)
But neither of them did what I want: the first one stopped after it yielded start, and the second one yielded start, then a generator object, and then stopped.
NOTE: Please, I know you can do this using a while-loop:
def infinity(start):
    while True:
        yield start
        start += 1
I just want to know if this can be done recursively.
Yes, you can do this:
def infinity(start):
    yield start
    for x in infinity(start + 1):
        yield x
This will error out once the maximum recursion depth is reached, though.
Starting from Python 3.3, you'll be able to use
def infinity(start):
    yield start
    yield from infinity(start + 1)
If you just call your generator function recursively without looping over it or yield from-ing it, all you do is build a new generator, without actually running the function body or yielding anything.
See PEP 380 for further details.
In some cases it might be preferable to use a stack instead of recursion for generators. It should be possible to rewrite a recursive method using a stack and a while loop.
Here's an example of a recursive method which uses a callback and can be rewritten using stack logic:
def traverse_tree(callback):
    # Get the root node from somewhere.
    root = get_root_node()
    def recurse(node):
        callback(node)
        for child in node.get('children', []):
            recurse(child)
    recurse(root)
The above method traverses a node tree where each node has a children array which may contain child nodes. As each node is encountered, the callback is issued and the current node is passed to it.
The method could be used this way, printing out some property on each node.
def callback(node):
    print(node['id'])

traverse_tree(callback)
Use a stack instead and write the traversal method as a generator
# A stack-based alternative to the traverse_tree method above.
def iternodes():
    stack = [get_root_node()]
    while stack:
        node = stack.pop()
        yield node
        for child in reversed(node.get('children', [])):
            stack.append(child)
(Note that if you want the same traversal order as originally, you need to reverse the order of children because the first child appended to the stack will be the last one popped.)
Now you can get the same behavior as traverse_tree above, but with a generator:
for node in iternodes():
    print(node['id'])
This isn't a one-size-fits-all solution but for some generators you might get a nice result substituting stack processing for recursion.
def lprint(a):
    if isinstance(a, list):
        for i in a:
            yield from lprint(i)
    else:
        yield a

b = [[1, [2, 3], 4], [5, 6, [7, 8, [9]]]]
for i in lprint(b):
    print(i)

invoking yield for a generator in another function

suppose I have some manager object. This object's API has a main_hook function, that gets another function f as its argument, and runs the given f in a loop, doing some stuff in between each iteration:
def main_hook(self, f):
    while (self.shouldContinue()):
        # do some preparations
        f(self)
        # do some tear down
Now, I also have (more accurately, would like to have) a function stop_and_do_stuff, that once called, stops main_hook dead in its tracks, returns control to whichever function called main_hook, and, after that function finishes what it's doing, hands control back to main_hook to continue. Basically the result would be the same as doing:
def main_hook(self, f):
    while (self.shouldContinue()):
        # do some preparations
        yield
        # do some tear down
Except that instead of yield I want a call to f(), while giving f the option to call self.stop_and_do_stuff().
I can't work around this by making f also a generator, for two reasons:
1. f isn't part of my API - it's given to me by a user who uses my lib.
2. Even if I could ask him to use yield, the place in the code where he will need to call stop_and_do_stuff won't be directly inside f, but rather somewhere in the call stack inside f(), e.g.:
def h(manager):
    # do stuff
    if should_stop:
        manager.stop_and_do_stuff()
    # do more stuff

def g(manager):
    # some stuff
    if should_stop:
        manager.stop_and_do_stuff()
    # more stuff
    if should_stop_again:
        manager.stop_and_do_stuff()
    if should_call_h:
        h(manager)

def f(manager):
    g(manager)
so if I choose to make f a generator, I also need to make g a generator and also h, otherwise this trick won't work.
Is there any solution to all of this? maybe I'm trying to solve it the wrong way?
(I know this question is long and ugly - it's the best I could do. If something isn't clear please tell me and I'll clarify it)
EDIT
Maybe pep 342 is the solution?
My previous answer describes how to do this in Python2, which is very ugly. But now I ran across PEP 380: Syntax for Delegating to a Subgenerator. That does exactly what you ask. The only problem is that it requires Python3. But that shouldn't really be a problem.
Here's how it works:
def worker():
    yield 1
    yield 2
    return 3

def main():
    yield 0
    value = yield from worker()
    print('returned %d' % value)
    yield 4

for m in main():
    print('generator yields %d' % m)
The result of this is:
generator yields 0
generator yields 1
generator yields 2
returned 3
generator yields 4
Exceptions are passed through the way you would expect.
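For instance, an exception thrown into the delegating generator is forwarded into worker() through the yield from (a sketch along the lines of the answer's example, with a modified worker()):

```python
def worker():
    try:
        yield 1
        yield 2
    except ValueError:
        yield 'worker handled it'

def main():
    yield from worker()

m = main()
first = next(m)                # 1
handled = m.throw(ValueError)  # raised at worker's `yield 1`, caught there
print(handled)  # 'worker handled it'
```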
I believe I should also add an answer from the other point of view, i.e. not trying to explain how you could achieve what we can understand of what you are trying to do, but why yield definitely couldn't possibly work.
When a function contains the yield keyword it is deeply modified. It is still a callable, but not a normal function any more: it becomes a factory that returns an iterator.
From the caller's point of view there is no difference between the three implementations below (except that the yield one is so much simpler).
##########################################
print("Function iterator using yield")

def gen():
    for x in range(0, 10):
        yield x

f = gen()
try:
    while True:
        print(next(f), end=' ')
except StopIteration:
    pass

for x in gen():
    print(x, end=' ')
print()

#########################################
print("Class iterator defining __iter__ and __next__")

class gen2(object):
    def __init__(self):
        self.index = 0
        self.limit = 10
    def __iter__(self):
        return self
    def __next__(self):
        if self.index >= self.limit:
            raise StopIteration
        self.index += 1
        return self.index - 1

f = gen2()
try:
    while True:
        print(next(f), end=' ')
except StopIteration:
    pass

for x in gen2():
    print(x, end=' ')
print()

#########################################
print("Function iterator using iter() and a sentinel")

def gen3():
    def g3():
        if g3.index is None:
            g3.index = 0
        g3.index += 1
        return g3.index - 1
    g3.index = None
    return iter(g3, 10)

f = gen3()
try:
    while True:
        print(next(f), end=' ')
except StopIteration:
    pass

for x in gen3():
    print(x, end=' ')
print()
Then you should understand that yield is not much about control flow, but about keeping the call context inside variables. Once that is understood, you have to decide whether the API of main_hook really wants to provide an iterator to its caller. Then, if so, and if f may loop, it must also be an iterator (and there should be a loop around the calls to f(), like below).
def main_hook(self, f):
    while (self.shouldContinue()):
        # do some preparations
        for v in f(self):
            yield v
        # do some tear down
But you should not care whether f() has to call inner functions g(), etc. That is completely irrelevant. You provide a lib, and it is your user's problem to call it with an appropriate iterable. If you believe your lib's users won't be able to, you will have to change the overall design.
Hope it helps.
I don't understand the whole thing either (what does the main_hook caller look like?), but I would say: throw a StopNow exception when you should stop, just like you would throw StopIteration when your generator is finished.
Here is how I understood the thing, as well as what I would do:
class StopNow(Exception):
    pass

def main_hook(self, f):
    got_stop_now_exc = False
    while (not got_stop_now_exc and self.shouldContinue()):
        # do some preparations
        try:
            f(self)
        except StopNow:
            got_stop_now_exc = True
        # do some compulsory tear down, exception or not

def stop_and_do_stuff():
    raise StopNow()

def my_f():
    if needed:
        stop_and_do_stuff()

def the_main_hook_caller():
    while i_should:
        managerthingie.main_hook(my_f)
        do_stuff()
The behavior you describe looks exactly like a simple function call. Like below.
def f(manager):
    print("Entering f")
    manager.stop_and_do_stuff()
    print("Exiting f")

class Manager(object):
    def shouldContinue(self):
        return True
    def stop_and_do_stuff(self):
        print("Manager stop and do stuff")
    def main_hook(self, f):
        while self.shouldContinue():
            print("Manager Setup")
            f(self)
            print("Manager Tear Down")
No problem if f() is provided by another user, or if stop_and_do_stuff is called from some inner function. If you also want the manager to be able to unwind the stack from stop_and_do_stuff and really exit in some cases, no problem: just raise some exception from it, and catch it from main_hook or upper code.
You should be able to do, from inside stop_and_do_stuff(), whatever you want to do from the caller of main_hook. If not, you should explain why.
What is unclear in the question is what's happening on the caller side of main_hook(), and why you would want to be able to exit the main_hook loop, but not really. Either the main_hook caller expects a generator or it does not. You need to explain that part if you want a sensible answer (some context would also be nice: what you are really trying to do, and what your real constraints are. You said f is provided by some other user and main_hook is in a lib, but what about main_hook's caller? There are probably well-known solutions to the usual cases).
I am not quite sure what exactly you are trying to achieve, so maybe if you can explain the problem more, instead of giving a solution, that would be better.
From my partial understanding, why don't you do something like this:
def main_hook(self, f):
    while (self.shouldContinue()):
        # do some preparations
        stop_and_do_stuff = f(self)
        if stop_and_do_stuff:
            yield
        # do some tear down
So basically f returns a flag telling whether to stop or not, and if it says stop, we yield to the function which called main_hook; that function can continue after doing some stuff.
e.g.
class A(object):
    def main_hook(self, f):
        while (self.shouldContinue()):
            # do some preparations
            stop = f(self)
            if stop:
                yield
            # do some tear down
    def shouldContinue(self):
        return True

def f(a):
    return True

a = A()
for x in a.main_hook(f):
    print(x)
