is there an alternative way of calling next on python generators? - python

I have a generator and I would like to know if I can use it without having to worry about StopIteration, and without a for item in generator loop. I would like to use it with a while statement, for example (or other constructs). How can I do that?

built-in function
next(iterator[, default])
Retrieve the next item from the iterator by calling its __next__() method. If default is given, it is returned if the iterator is exhausted, otherwise StopIteration is raised.
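For example, a while loop can use the default to detect exhaustion without ever touching StopIteration (a minimal sketch; using None as the sentinel assumes the generator never yields None itself):
gen = (x * x for x in range(3))
while True:
    item = next(gen, None)  # returns None once the generator is exhausted
    if item is None:
        break
    print(item)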
In Python 2.5 and older:
raiseStopIteration = object()
def next(iterator, default=raiseStopIteration):
    if not hasattr(iterator, 'next'):
        raise TypeError("not an iterator")
    try:
        return iterator.next()
    except StopIteration:
        if default is raiseStopIteration:
            raise
        else:
            return default
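A quick check of the backport's behaviour (Python 2 syntax, to match the code above):
it = iter(['a', 'b'])
print next(it)          # a
print next(it)          # b
print next(it, 'done')  # done, returned instead of raising StopIteration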

Another option is to read all generator values at once:
>>> alist = list(agenerator)
Example:
>>> def f():
...     yield 'a'
...
>>> a = list(f())
>>> a[0]
'a'
>>> len(a)
1

Use this to wrap your generator:
class GeneratorWrap(object):

    def __init__(self, generator):
        self.generator = generator

    def __iter__(self):
        return self

    def next(self):
        for o in self.generator:
            return o
        raise StopIteration  # If you don't care about the iterator protocol, remove this line and the __iter__ method.
Use it like this:
def example_generator():
    for i in [1, 2, 3, 4, 5]:
        yield i

gen = GeneratorWrap(example_generator())
print gen.next()  # prints 1
print gen.next()  # prints 2
Update: Prefer the answer describing the built-in next() function; it is much better than this wrapper.

Related

return and return None in a generator: PEP guidelines

According to PEP 8 we should be consistent in our function declarations and ensure that they all have the same return-pattern, i.e. all should return an expression or all should not. However, I am not sure how to apply this to generators.
A generator will yield values as long as the code reaches them, unless a return statement is encountered, in which case it stops the iteration. However, I don't see any use-case in which returning a value from a generator function can happen. In that spirit, I don't see why it is useful, from a PEP 8 perspective, to end such a function with an explicit return None. In other words, why ought we to spell out a return statement for generators if the return expression is only reached when the yielding is over?
Example: in the following code, I don't see how hello() can be used to assign 100 to a variable (thus using the return statement). So why does PEP 8 expect us to write a return statement (be it 100 or None).
def hello():
    for i in range(5):
        yield i
    return 100

h = [x for x in hello()]
g = hello()
print(h)
# [0, 1, 2, 3, 4]
print(g)
# <generator object hello at 0x7fd2f285a7d8>
# can we ever get 100?
You have misread PEP 8. PEP 8 states:
Be consistent in return statements. Either all return statements in a function should return an expression, or none of them should.
(bold emphasis mine)
You should be consistent with how you use return within a single function, not across your whole project.
Use return, it's the only return statement in the function.
However, I don't see any use-case in which returning a value from a generator function can happen.
The return value of a generator is attached to the StopIteration exception raised:
>>> def gen():
...     if False: yield
...     return 'Return value'
...
>>> try:
...     next(gen())
... except StopIteration as ex:
...     print(ex.value)
...
Return value
And this is also the mechanism by which yield from produces a value; the return value of yield from is the value attribute on the StopIteration exception. A generator can thus return a result to code using result = yield from generator by using return result:
>>> def bar():
...     result = yield from gen()
...     print('gen() returned', result)
...
>>> next(bar(), None)
gen() returned Return value
This feature is used in the Python standard library; e.g. in the asyncio library the value of StopIteration is used to pass along Task results, and the @coroutine decorator uses res = yield from ... to run a wrapped generator or awaitable and pass through the return value.
So, from a PEP 8 point of view, for generators there are two possibilities:
You are using return to exit the generator early, say in a loop with if. Use return, no need to add None:
def foo():
    while bar:
        yield ham
        if spam:
            return
You are using return <something> to exit and set StopIteration.value. Use return <something> consistently throughout your generator, even when returning None:
def foo():
    for bar in baz:
        yield bar
        if spam:
            return 'The bar bazzed the spam'
    return None

Optional yield or return in python3. How to?

I would like to have a function that can, optionally, return or yield the result.
Here is an example.
def f(option=True):
    ...
    for ...:
        if option:
            yield result
        else:
            results.append(result)
    if not option:
        return results
Of course, this doesn't work; I have tried it with Python 3 and I always get a generator, no matter what value I set for option.
As far as I understand, Python checks the body of the function and, if a yield is present, the result is a generator.
Is there any way to get around this and make a function that can return or yield at will?
You can't. Any use of yield makes the function a generator.
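This is easy to verify: the return value is never what the call produces (a minimal sketch):
def f(option=True):
    if option:
        yield 1
    return []  # merely sets StopIteration.value; the call still yields a generator

print(type(f(False)))  # <class 'generator'>, never a list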
You could wrap your function with one that uses list() to store all values the generator produces in a list object and returns that:
def f_wrapper(option=True):
    gen = f()
    if option:
        return gen        # return the generator unchanged
    return list(gen)      # return all values of the generator as a list
However, generally speaking, this is bad design. Don't have your functions alter behaviour like this; stick to one return type (a generator or an object) and don't have it switch between the two.
Consider splitting this into two functions instead:
def f():
    yield result

def f_as_list():
    return list(f())
and use f() if you need the generator, or f_as_list() if you want a list instead.
Since list() and next() (to access just one value of a generator) are built-in functions, you rarely need a wrapper. Just call them directly:
# access elements one by one
gen = f()
one_value = next(gen)
# convert the generator to a list
all_values = list(f())
What about this?
def make_f_or_generator(option):
    def f():
        return "I am a function."
    def g():
        yield "I am a generator."
    if option:
        return f
    else:
        return g
This gives you at least the choice to create a function or a generator.
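A quick usage sketch; note that the factory returns the inner function itself, which you then have to call:
f = make_f_or_generator(True)
print(f())        # I am a function.
g = make_f_or_generator(False)
print(next(g()))  # I am a generator.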
A class-based approach:
class FunctionAndGenerator:
    def __init__(self):
        self.counter = 0

    def __iter__(self):
        return self

    # You need a variable to indicate whether __next__ should return the
    # string or raise StopIteration. Raising StopIteration stops the loop
    # from iterating further; __next__ has to raise it at some point.
    def __next__(self):
        self.counter += 1
        if self.counter > 1:
            raise StopIteration
        return f"I'm a generator and I've generated {self.counter} times"

    def __call__(self):
        return "I'm a function"
x = FunctionAndGenerator()
print(x())
for i in x:
    print(i)
I'm a function
I'm a generator and I've generated 1 times

How to write foldr (right fold) generator in Python?

Python's reduce is a left fold, which means it is tail-recursive and its uses can be neatly rewritten as a loop. However, Python does not have a built-in function for doing right folds. Since right folds are most naturally written with recursion (and Python doesn't like recursion as much as functional languages do), I'm interested in writing a right fold (foldr) in terms of a generator.
How can this be done? And very specifically, how can it be done in Python 2.7?
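To see why the distinction matters, here is how the two folds disagree on a non-commutative operator (Python 2.7, where reduce is a built-in):
from operator import sub
print reduce(sub, [1, 2, 3])  # left fold: (1 - 2) - 3 == -4
# a right fold would instead compute 1 - (2 - 3) == 2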
EDIT: I should have mentioned that one of the benefits of foldr is that you can sometimes fold over infinite lists without risk of eating your stack alive. I would like to see answers that preserve this property.
For example, Haskell's foldr is lazy on both input and output and can allow for short-circuiting "step" functions to work on long/infinite inputs:
foldr (&&) True (repeat False) -- gives False
Any Python variant that uses list/reversed/etc. on the input will hang if given itertools.repeat(some_value).
Note that Python's reduce chokes on the same example because of strictness:
reduce(lambda x, y: x and y, itertools.repeat(False), True) # hangs
So, a simple generator in Python (without appropriate error checking):
def foldr(op, lst):
    l, x = reversed(list(lst)), None
    for i in l:
        if not x:  # note: this also trips on falsy items, hence "without error checking"
            x = i
            continue
        x = op(i, x)  # element first, accumulator second, as a right fold expects
        yield x
e.g.:
>>> from operator import mul
>>> for i in foldr(mul, [1,2,3,4]):
...     print i
12
24
24
Almost identical to the 'roughly equivalent' implementation of reduce in the documentation:
def foldr(function, iterable, initializer=None):
    it = reversed(list(iterable))
    if initializer is None:
        try:
            initializer = next(it)
        except StopIteration:
            raise TypeError('foldr() of empty sequence with no initial value')
    accum_value = initializer
    for x in it:
        accum_value = function(accum_value, x)
    yield accum_value
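A quick sanity check: since the yield replaces reduce's final return, this version yields the single folded value:
>>> from operator import add
>>> next(foldr(add, [1, 2, 3, 4]))
10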
[Edit]
So, purely as an exercise of the mind and with very little practical value, it is possible to defer evaluation as long as there is some cooperation from the function that you are folding with, e.g.:
class Defer(object):
    def __init__(self, func, *args):
        self.func = func
        self.args = args

    def __bool__(self):
        return self.func(*self.args)

    def __int__(self):
        return self.func(*self.args)

def foldr(function, iterable, initializer):
    it = iter(iterable)
    try:
        return function(next(it), Defer(foldr, function, it, initializer))
    except StopIteration:
        return initializer
Then, as long as the function converts its arguments to the right type, you can defer the calculation. However, this will not work with native operators, so I am not sure how useful this really is:
>>> print(foldr(lambda a, b: int(a)*int(b), [1,2,3,4], 1))
24
Defining a forever generator:
from itertools import repeat

def forever():
    yield False
    yield True
    for i in repeat(False):
        yield i
Folding or across this infinite list returns as soon as it finds a True:
>>> print(foldr(lambda a, b: bool(a) or bool(b), forever(), False))
True
You will have to catch the appropriate exceptions, but this should give an idea of how to do it iteratively:
from collections import Iterator  # collections.abc.Iterator on Python 3

def foldr(a, b, l):
    if isinstance(l, Iterator):
        it = reversed(list(l))
    else:
        it = reversed(l)
    try:
        nxt = next(it)
    except StopIteration:
        return
    c = a(nxt, b)
    stop = object()
    while nxt is not stop:
        yield c
        nxt = next(it, stop)
        c = a(nxt, c) if nxt is not stop else c
from operator import truediv
for c in foldr(truediv, 1, [1, 2, 3, 4, 5, 6, 7, 8]):
    print(c)
If you are going to define a function using generators, why not use the following?
def foldr(op, lst):
    return reduce(op, reversed(lst))
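One caveat, since reduce passes (accumulator, element): for a non-commutative op this computes ((4 op 3) op 2) op 1 rather than 1 op (2 op (3 op 4)). Flipping the arguments in a lambda restores the right-fold order (a small sketch; foldr2 is just an illustrative name, and it is still strict, so unsuitable for infinite inputs):
from operator import sub

def foldr2(op, lst):
    # flip the arguments so each step computes op(element, accumulator)
    return reduce(lambda acc, x: op(x, acc), reversed(lst))

print foldr2(sub, [1, 2, 3])  # 2, i.e. 1 - (2 - 3)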
I think something like this is what you want:
def foldr(fn, seq, init):
    it = iter(seq)
    try:
        x = next(it)
    except StopIteration:
        try:
            for elem in init:
                yield elem
        except TypeError:
            yield init
    else:
        try:
            for elem in fn(x, foldr(fn, it, init)):
                yield elem
        except TypeError:
            yield fn(x, foldr(fn, it, init))
It's not exactly production-ready since it will hit the Python stack limit pretty quickly and it will be surprising in the presence of side-effecting functions due to the double call to fn, but it should be enough to give you an idea.

Return in generator together with yield

In Python 2 it was a SyntaxError to use return with a value together with yield in a function definition. But for this code in Python 3.3:
def f():
    return 3
    yield 2

x = f()
print(x.__next__())
there is no error about return being used in a function with yield. However, when __next__ is called, a StopIteration exception is raised. Why isn't the value 3 simply returned? Is the return somehow ignored?
This is a new feature in Python 3.3. Much like return in a generator has long been equivalent to raise StopIteration(), return <something> in a generator is now equivalent to raise StopIteration(<something>). For that reason, the exception you're seeing should be printed as StopIteration: 3, and the value is accessible through the attribute value on the exception object. If the generator is delegated to using the (also new) yield from syntax, it is the result. See PEP 380 for details.
def f():
    return 1
    yield 2

def g():
    x = yield from f()
    print(x)

# g is still a generator so we need to iterate to run it:
for _ in g():
    pass
This prints 1, but not 2.
The return value is not ignored, but generators only yield values; a return just ends the generator, in this case early. Advancing the generator never reaches the yield statement in that case.
Whenever an iterator reaches the 'end' of the values to yield, StopIteration must be raised. Generators are no exception. As of Python 3.3, however, any return expression becomes the value of the exception:
>>> def gen():
...     return 3
...     yield 2
...
>>> try:
...     next(gen())
... except StopIteration as ex:
...     e = ex
...
>>> e
StopIteration(3,)
>>> e.value
3
Use the next() function to advance iterators, instead of calling .__next__() directly:
print(next(x))

i don't know __iter__ in python, who can give me a good code example

My code doesn't work:
class a(object):
    def __iter(self):
        return 33

b = {'a': 'aaa', 'b': 'bbb'}
c = a()
print b.itervalues()
print c.itervalues()
Please answer with code rather than text, because my English is not very good. Thank you.
a. Spell it right: not
def __iter(self):
but:
def __iter__(self):
with __ before and after iter.
b. Make the body right: not
return 33
but:
yield 33
or
return iter([33])
If you return a value from __iter__, return an iterator (an iterable, as in return [33], is almost as good but not quite...); or else, yield 1+ values, making __iter__ into a generator function (so it intrinsically returns a generator iterator).
c. Call it right: not
a().itervalues()
but, e.g.:
for x in a(): print x
or
print list(a())
itervalues is a method of dict, and has nothing to do with __iter__.
If you fix all three (!) mistakes, the code works better ;-).
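Putting the three fixes together, a minimal sketch of the corrected code (in the Python 2 style of the question):
class a(object):
    def __iter__(self):  # double underscores before and after, and a generator body
        yield 33

b = {'a': 'aaa', 'b': 'bbb'}
print list(b.itervalues())  # the dict's values; itervalues belongs to dict
print list(a())             # [33]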
A few things about your code:
__iter should be __iter__
You're returning 33 in the __iter__ function. You should actually be returning an iterator object. An iterator is an object which keeps returning different values when its next() function is called (for example a sequence of values like 0, 1, 2, 3, ...).
Here's a working example of an iterator:
class a(object):
    def __init__(self, x=10):
        self.x = x

    def __iter__(self):
        return self

    def next(self):
        if self.x > 0:
            self.x -= 1
            return self.x
        else:
            raise StopIteration

c = a()
for x in c:
    print x
Any object of class a is an iterator object. Calling the __iter__ function is supposed to return the iterator, so it returns itself; as you can see, the a class has a next() function, so this is an iterator object.
When the next function is called, it keeps returning consecutive values until it hits zero, and then it raises the StopIteration exception, which (appropriately) stops the iteration.
If this seems a little hazy, I would suggest experimenting with the code and then checking out the documentation here: http://docs.python.org/library/stdtypes.html
Here is a code example that implements the xrange builtin:
class my_xrange(object):
    def __init__(self, start, end, skip=1):
        self.curval = int(start)
        self.lastval = int(end)
        self.skip = int(skip)
        assert(int(skip) != 0)

    def __iter__(self):
        return self

    def next(self):
        if (self.skip > 0) and (self.curval >= self.lastval):
            raise StopIteration()
        elif (self.skip < 0) and (self.curval <= self.lastval):
            raise StopIteration()
        else:
            oldval = self.curval
            self.curval += self.skip
            return oldval

for i in my_xrange(0, 10):
    print i
You are using this language feature incorrectly.
http://docs.python.org/library/stdtypes.html#iterator-types
This above link will explain what the function should be used for.
You can try to see documentation in your native language here: http://wiki.python.org/moin/Languages
