How would one create an iterative function (or iterator object) in Python?
Iterator objects in Python conform to the iterator protocol, which basically means they provide two methods: __iter__() and __next__().
The __iter__() method returns the iterator object itself and is implicitly called at the start of loops.
The __next__() method returns the next value and is implicitly called at each loop increment. It raises a StopIteration exception when there are no more values to return, which is implicitly captured by looping constructs to stop iterating.
Here's a simple example of a counter:
class Counter:
    def __init__(self, low, high):
        self.current = low - 1
        self.high = high

    def __iter__(self):
        return self

    def __next__(self):  # Python 2: def next(self)
        self.current += 1
        if self.current < self.high:
            return self.current
        raise StopIteration

for c in Counter(3, 9):
    print(c)
This will print:
3
4
5
6
7
8
This is easier to write using a generator, as covered in a previous answer:
def counter(low, high):
    current = low
    while current < high:
        yield current
        current += 1

for c in counter(3, 9):
    print(c)
The printed output will be the same. Under the hood, the generator object supports the iterator protocol and does something roughly similar to the class Counter.
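For instance, you can drive the generator by hand with the built-in iter() and next() functions, exactly as a for loop does behind the scenes (a quick illustrative snippet using the counter generator above):
it = counter(3, 6)     # calling the generator function returns a generator object
print(iter(it) is it)  # True -- a generator is its own iterator
print(next(it))        # 3
print(next(it))        # 4
print(next(it))        # 5
# another next(it) would raise StopIteration, which for loops catch automatically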
David Mertz's article, Iterators and Simple Generators, is a pretty good introduction.
There are four ways to build an iterative function:
create a generator (uses the yield keyword)
use a generator expression (genexp)
create an iterator (defines __iter__ and __next__ (or next in Python 2.x))
create a class that Python can iterate over on its own (defines __getitem__)
Examples:
# generator
def uc_gen(text):
    for char in text.upper():
        yield char

# generator expression
def uc_genexp(text):
    return (char for char in text.upper())

# iterator protocol
class uc_iter():
    def __init__(self, text):
        self.text = text.upper()
        self.index = 0
    def __iter__(self):
        return self
    def __next__(self):
        try:
            result = self.text[self.index]
        except IndexError:
            raise StopIteration
        self.index += 1
        return result

# getitem method
class uc_getitem():
    def __init__(self, text):
        self.text = text.upper()
    def __getitem__(self, index):
        return self.text[index]
To see all four methods in action:
for iterator in uc_gen, uc_genexp, uc_iter, uc_getitem:
    for ch in iterator('abcde'):
        print(ch, end=' ')
    print()
Which results in:
A B C D E
A B C D E
A B C D E
A B C D E
Note:
The two generator types (uc_gen and uc_genexp) cannot be reversed(); the plain iterator (uc_iter) would need the __reversed__ magic method (which, according to the docs, must return a new iterator, although returning self works, at least in CPython); and the getitem iterable (uc_getitem) must have the __len__ magic method:
# for uc_iter we add __reversed__ and update __next__
def __reversed__(self):
    self.index = -1
    return self

def __next__(self):
    try:
        result = self.text[self.index]
    except IndexError:
        raise StopIteration
    self.index += -1 if self.index < 0 else +1
    return result

# for uc_getitem
def __len__(self):
    return len(self.text)
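With those changes in place, a quick check (a sketch assuming the modified uc_iter and uc_getitem definitions above):
for ch in reversed(uc_getitem('abcde')):  # possible once __len__ is defined
    print(ch, end=' ')
print()
for ch in reversed(uc_iter('abcde')):     # possible once __reversed__ is defined
    print(ch, end=' ')
print()
# E D C B A
# E D C B A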
To answer Colonel Panic's secondary question about an infinite lazily evaluated iterator, here are those examples, using each of the four methods above:
# generator
def even_gen():
    result = 0
    while True:
        yield result
        result += 2

# generator expression
def even_genexp():
    return (num for num in even_gen())  # or even_iter or even_getitem
    # not much value under these circumstances

# iterator protocol
class even_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value

# getitem method
class even_getitem():
    def __getitem__(self, index):
        return index * 2
import random

for iterator in even_gen, even_genexp, even_iter, even_getitem:
    limit = random.randint(15, 30)
    count = 0
    for even in iterator():
        print(even, end=' ')
        count += 1
        if count >= limit:
            break
    print()
Which results in (at least for my sample run):
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32
How to choose which one to use? This is mostly a matter of taste. The two methods I see most often are generators and the iterator protocol, as well as a hybrid (__iter__ returning a generator).
Generator expressions are useful for replacing list comprehensions (they are lazy and so can save on resources); see the sketch below.
If you need compatibility with earlier Python 2.x versions, use __getitem__.
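To illustrate the generator-expression point, compare the memory footprint of a list comprehension with that of the equivalent generator expression (a small sketch; the exact sizes reported by sys.getsizeof will vary by platform and Python version):
import sys

squares_list = [n * n for n in range(100000)]  # the whole list is built in memory
squares_gen  = (n * n for n in range(100000))  # nothing is built until iterated

print(sys.getsizeof(squares_list))  # hundreds of kilobytes
print(sys.getsizeof(squares_gen))   # a couple of hundred bytes
print(sum(squares_gen))             # values are produced one at a time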
I see some of you doing return self in __iter__. I just wanted to note that __iter__ itself can be a generator, which removes the need for __next__ and for raising StopIteration exceptions:
class range:
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __iter__(self):
        i = self.a
        while i < self.b:
            yield i
            i += 1
Of course here one might as well directly make a generator, but for more complex classes it can be useful.
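One advantage of writing __iter__ this way is that the object is a reusable iterable rather than a one-shot iterator: every for loop calls __iter__ again and gets a fresh generator. A quick check with the class above (note that it shadows the builtin range inside this module):
r = range(1, 4)  # the custom class defined above
print(list(r))   # [1, 2, 3]
print(list(r))   # [1, 2, 3] -- a new generator is produced on each pass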
First of all, the itertools module is incredibly useful for all sorts of cases in which an iterator would be useful, but here is all you need to create an iterator in Python:
yield
Isn't that cool? Yield can be used to replace a normal return in a function. It returns the object just the same, but instead of destroying state and exiting, it saves state for when you want to execute the next iteration. Here is an example of it in action pulled directly from the itertools function list:
def count(n=0):
    while True:
        yield n
        n += 1
As stated in the function's description (it's the count() function from the itertools module...), it produces an iterator that returns consecutive integers starting with n.
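Driving it by hand shows the saved state in action:
c = count(10)
print(next(c))  # 10
print(next(c))  # 11
print(next(c))  # 12 -- n is remembered between calls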
Generator expressions are a whole other can of worms (awesome worms!). They may be used in place of a List Comprehension to save memory (list comprehensions create a list in memory that is destroyed after use if not assigned to a variable, but generator expressions can create a Generator Object... which is a fancy way of saying Iterator). Here is an example of a generator expression definition:
gen = (n for n in xrange(0,11))
This is very similar to our iterator definition above except the full range is predetermined to be between 0 and 10.
I just found xrange() (surprised I hadn't seen it before...) and added it to the above example. xrange() is an iterable version of range() which has the advantage of not prebuilding the list (note that in Python 3, xrange() is gone and range() itself behaves this way). It would be very useful if you had a giant corpus of data to iterate over and only had so much memory to do it in.
This question is about iterable objects, not about iterators. In Python, sequences are iterable too so one way to make an iterable class is to make it behave like a sequence, i.e. give it __getitem__ and __len__ methods. I have tested this on Python 2 and 3.
class CustomRange:
    def __init__(self, low, high):
        self.low = low
        self.high = high

    def __getitem__(self, item):
        if item >= len(self):
            raise IndexError("CustomRange index out of range")
        return self.low + item

    def __len__(self):
        return self.high - self.low

cr = CustomRange(0, 10)
for i in cr:
    print(i)
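As a bonus, because the class defines both __getitem__ and __len__, Python can also provide membership testing and reversed iteration without any extra code (a short check using the cr instance above):
print(5 in cr)             # True -- `in` falls back to indexing with __getitem__
print(list(reversed(cr)))  # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] -- reversed() uses __len__ and __getitem__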
If you're looking for something short and simple, maybe this will be enough for you:
class A(object):
    def __init__(self, l):
        self.data = l
    def __iter__(self):
        return iter(self.data)
example of usage:
In [3]: a = A([2,3,4])
In [4]: [i for i in a]
Out[4]: [2, 3, 4]
All the answers on this page are really great for a complex object. But for classes containing built-in iterable types as attributes, like str, list, set or dict, or any implementation of collections.abc.Iterable, you can omit certain things in your class.
class Test(object):
    def __init__(self, string):
        self.string = string

    def __iter__(self):
        # since your string is already iterable
        return (ch for ch in self.string)
        # or simply
        return self.string.__iter__()
        # also
        return iter(self.string)
It can be used like:
for x in Test("abcde"):
    print(x)
# prints
# a
# b
# c
# d
# e
Include the following code in your class code.
def __iter__(self):
    for x in self.iterable:
        yield x
Make sure that you replace self.iterable with the iterable you want to iterate through.
Here's some example code:
class someClass:
    def __init__(self, list):
        self.list = list
    def __iter__(self):
        for x in self.list:
            yield x

var = someClass([1, 2, 3, 4, 5])
for num in var:
    print(num)
Output
1
2
3
4
5
Note: Since strings are also iterable, they can also be used as an argument for the class
foo = someClass("Python")
for x in foo:
    print(x)
Output
P
y
t
h
o
n
This is an iterable function without yield. It makes use of the iter() function and a closure which keeps its state in a mutable (a list) in the enclosing scope, for Python 2. (The two-argument form iter(callable, sentinel) calls the callable repeatedly until it returns the sentinel value.)
def count(low, high):
    counter = [0]
    def tmp():
        val = low + counter[0]
        if val < high:
            counter[0] += 1
            return val
        return None
    return iter(tmp, None)
For Python 3, the closure state is kept in an immutable variable in the enclosing scope, and nonlocal is used in the local scope to update the state variable.
def count(low, high):
    counter = 0
    def tmp():
        nonlocal counter
        val = low + counter
        if val < high:
            counter += 1
            return val
        return None
    return iter(tmp, None)
Test:
for i in count(1, 10):
    print(i)
1
2
3
4
5
6
7
8
9
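The two-argument iter(callable, sentinel) form used above is handy on its own: it calls the callable repeatedly and stops as soon as the sentinel value is returned. A classic illustrative use is reading a file in fixed-size blocks until read() returns an empty bytes object (the filename and chunk size here are just placeholders):
with open('data.bin', 'rb') as f:                  # 'data.bin' is a hypothetical file
    for chunk in iter(lambda: f.read(4096), b''):  # stop when read() returns b''
        print(len(chunk))                          # do something with each chunk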
class uc_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value
Improving on the previous answer, one of the advantages of using a class is that you can add a __call__ method to return self.value or even next_value.
class uc_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value
    def __call__(self):
        next_value = self.value
        self.value += 2
        return next_value

c = uc_iter()
print([c() for _ in range(10)])
print([next(c) for _ in range(5)])
# [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
# [20, 22, 24, 26, 28]
Another example of a class, based on Python's random module, that can be both called and iterated can be seen in my implementation here.
Related
If I understand properly, in Python we have:
Iterables = __iter__() is implemented
Iterators = __iter__() returns self & __next__() is implemented
Generators = an iterator created with a yield statement or a generator expression.
Question: Are there categories above that are always/never consumable?
By consumable I mean iterating through them "destroys" the iterable; like zip() (consumable) vs range() (not consumable).
All iterators are consumed; the reason you might not think so is that when you use an iterable with something like
for x in [1,2,3]:
the for loop is creating a new iterator for you behind the scenes. In fact, a list is not an iterator; iter([1,2,3]) returns something of type list_iterator, not the list itself.
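You can see the difference directly in a quick illustrative snippet:
lst = [1, 2, 3]
it = iter(lst)
print(type(it))   # <class 'list_iterator'>
print(list(it))   # [1, 2, 3]
print(list(it))   # [] -- the iterator has been consumed
print(list(lst))  # [1, 2, 3] -- the list itself can be iterated again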
Regarding the example you linked to in a comment, instead of
class PowTwo:
    def __init__(self, max=0):
        self.max = max

    def __iter__(self):
        self.n = 0
        return self

    def __next__(self):
        if self.n <= self.max:
            result = 2 ** self.n
            self.n += 1
            return result
        else:
            raise StopIteration
which has the side effect of modifying the iterator in the act of returning it, I would do something like
class PowTwoIterator:
    def __init__(self, max=0):
        self.max = max
        self._restart()

    def _restart(self):
        self._n = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self._n <= self.max:
            result = 2 ** self._n
            self._n += 1
            return result
        else:
            raise StopIteration
Now, the only way you can modify the state of the object is to do so explicitly (and even that should not be done lightly, since both _n and _restart are marked as not being part of the public interface).
The change in the name reminds you that this is first and foremost an iterator, not an iterable from which independent iterators can be obtained.
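To make that concrete, PowTwoIterator is consumed after one pass, which is exactly the behaviour the question asks about (a quick sketch using the class above):
p = PowTwoIterator(3)
print(list(p))  # [1, 2, 4, 8]
print(list(p))  # [] -- exhausted; you would have to call p._restart() explicitly to reuse it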
Using iter() in our for loops makes programming efficient in Python. How does it actually work?
I tried to visualize iter(iterable) at http://www.pythontutor.com/visualize.html#mode=display. There, iter helps to create an instance.
Doesn't it actually refer to internal numerical objects?
val = [1, 2, 3, 4, 5]
val = iter(val)
for item in val:
    print(item)

val = [1, 2, 3, 4, 5]
for item in val:
    print(item)
Both return the same output. But how does iter identify the values?
What you're doing is redundant. A for loop is essentially syntactic sugar for:
val = [1, 2, 3, 4, 5]
iterator = iter(val)
while True:
    try:
        item = next(iterator)
    except StopIteration:
        break
    print(item)
All that you're doing by calling iter() on your val is replacing
iterator = iter(val)
with
iterator = iter(iter(val))
Calling iter() on an iterator is a no-op; it simply returns the same iterator.
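You can check this yourself:
val = [1, 2, 3, 4, 5]
it = iter(val)
print(iter(it) is it)    # True -- the same iterator object is returned
print(iter(val) is val)  # False -- a list hands out a brand-new iterator each time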
Let's pretend that Python lists were not iterable and we wanted to create a class, MyList, that could be constructed with a list instance and iterate over it. MyList would need to implement an __iter__ method that returns an iterator object implementing the __next__ method. Each successive call to this __next__ method should return the next element of the list. When there are no more elements to be returned, it needs to raise a StopIteration exception. This iterator object clearly needs two pieces of information:
A reference to the list being iterated.
The index of the last element that was output.
Let's call this iterator class MyListIterator. Its implementation might be:
class MyListIterator:
    def __init__(self, l):
        self.l = l       # the list being iterated
        self.index = -1  # the index of the last element outputted

    def __next__(self):
        self.index += 1  # next index to output
        if self.index < len(self.l):
            return self.l[self.index]
        raise StopIteration
Class MyList then would be:
class MyList:
    def __init__(self, the_list):
        self.l = the_list  # we are constructed with a list

    def __iter__(self):
        return MyListIterator(self.l)  # pass our list to the iterator
An example of use (note that the nested loops work only because each for loop gets its own fresh MyListIterator from __iter__):
l = MyList([0, 1, 2])
for i in l:
    for j in l:
        print(i, j)
Prints:
0 0
0 1
0 2
1 0
1 1
1 2
2 0
2 1
2 2
I have a while loop that operates on outputs provided by another class, until no outputs are left.
while a.is_next():
    fn(a.get_next())
Is there a way of checking if a new item exists and "loading" it at the same time?
while b = a.get_next():
    fn(b)
It looks like you're trying to reinvent the iterator. Iterators must have two methods: an __iter__ method that returns the iterator itself and a __next__ method that returns either the next item or raises StopIteration. For example
class MyIterator:
    def __init__(self):
        self.list = [1, 2, 3]
        self.index = 0

    def __iter__(self):
        return self

    def __next__(self):
        try:
            ret = self.list[self.index]
            self.index += 1
            return ret
        except IndexError:
            raise StopIteration
That's a lot for that example, but it allows us to use that iterator everywhere Python expects an iterator
for x in MyIterator():
    print(x)
1
2
3
Not sure why you want this, but you can assign and check for a next item in the same statement, for example (itertools.takewhile stops the generator expression once is_next() becomes false, so the loop terminates):
import itertools as it

for b in (x.get_next() for x in it.takewhile(lambda y: y.is_next(), it.repeat(a))):
    fn(b)
Is there a way of checking if a new item exists and "loading" it at the same time?

The short answer is no, at least prior to Python 3.8 (see the note after the example below): an assignment cannot appear in a while loop's condition. However, why not simply reassign the value of a.get_next() to a variable on each iteration, and use that as your loop's condition:
b = a.get_next()  # get the initial value of b
while b:
    fn(b)
    b = a.get_next()  # get the next value for b. If b is falsy, the loop will end.
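Note: since Python 3.8 the assignment expression (walrus) operator := does let you assign inside the loop condition, so on a modern interpreter you could write the following (this assumes a.get_next() returns a falsy value such as None once it is exhausted):
while (b := a.get_next()):
    fn(b)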
Search for generators, iterators and the yield statement.
Code example:
class Container:
    def __init__(self, l):
        self.l = l

    def next(self):
        i = 0
        while i < len(self.l):
            yield self.l[i]
            i += 1

c = Container([1, 2, 3, 4, 5])
for item in c.next():
    print(item, end=" ")  # 1 2 3 4 5
I am designing a custom iterator in Python:
class Iterator():
    def __init__(self):
        return

    def fit(self, n):
        self.n = n
        return self

    def __iter__(self):
        for i in range(self.n):
            yield i
        return

it = Iterator().fit(10)
for i in it:
    print(i)

it.fit(20)
for i in it:
    print(i)
It is working fine, but I am wondering whether a new fit could be called before the previous one has finished, leading to strange behaviour of the class.
If yes how should I design it to make it more robust?
It is important to have some parameters passed from the fit method.
EDIT: I will introduce an example that is similar to my original problem.
The iterator class is designed to be used by a User class. It is important that when the evaluate method is called, all the numbers up to n/k are printed, without exception.
Maybe the use of a iterator.fit(n) method solves the problem?
class Iterator():
    def __init__(self, k):
        self.k = k
        return

    def fit(self, n):
        for i in range(int(n / self.k)):
            yield i
        return

class User():
    def __init__(self, iterator):
        self.iterator = iterator
        return

    def evaluate(self, n):
        for i in self.iterator.fit(n):
            print(i)
        return

it = Iterator(2)
u = User(it)
u.evaluate(10)  # I want to be sure that all the numbers until 9 are printed
u.evaluate(20)  # I want to be sure that all the numbers until 20 are printed
Because each call to range creates a new iterator, there will be no conflicts if you make multiple calls to fit.
Your class is a bit weird. You could either remove the __init__, as it does nothing, or put the fit method in there.
it = Iterator()
it1 = iter(it.fit(10))
it2 = iter(it.fit(5))
print(next(it1))
print(next(it1))
print(next(it2))
print(next(it1))
This prints:
0
1
0
2
You haven't actually written an iterator -- you've written a normal class that can return an iterator. The iterator that you are returning is a generator.
What this means is that calling fit() during iteration will have no effect -- at least, not until you iterate over your object again. For example:
>>> it = Iterator()
>>> for x in it.fit(7):
...     it.fit(3)
...     print(x)
...
0
1
2
3
4
5
6
>>> for x in it:
... print(x)
...
0
1
2
Can someone explain the syntax on lines 5 & 16?
 1 # Using the generator pattern (an iterable)
 2 class firstn(object):
 3     def __init__(self, n):
 4         self.n = n
 5         self.num, self.nums = 0, []
 6
 7     def __iter__(self):
 8         return self
 9
10     # Python 3 compatibility
11     def __next__(self):
12         return self.next()
13
14     def next(self):
15         if self.num < self.n:
16             cur, self.num = self.num, self.num+1
17             return cur
18         else:
19             raise StopIteration()
20
21 sum_of_first_n = sum(firstn(1000000))
That's tuple assignment; you can assign to multiple targets.
The right-hand expression is evaluated first, and then each value in that sequence is assigned to the names on left-hand side one by one, from left to right.
Thus, self.num, self.nums = 0, [] assigns 0 to self.num and [] to self.nums.
See the assignment statements documentation:
If the target list is a comma-separated list of targets: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets.
Because the right-hand side portion is executed first, the line cur, self.num = self.num, self.num+1 assigns self.num to cur after calculating self.num + 1, which is assigned to self.num. If self.num was 5 before that line, then after that line cur is 5, and self.num is 6.
self.num, self.nums = 0, []
cur, self.num = self.num, self.num+1
These are shorthands for the following:
self.num = 0
self.nums = []
and
cur = self.num
self.num = self.num + 1
As a personal preference, I would not use the compound assignments in either of these two lines. The assignments are not related, so there's no reason to combine them.
There are times when compound assignments can prove useful. Consider how one swaps two numbers in languages like C and Java:
temp = a
a = b
b = temp
In Python we can eliminate the temporary variable!
a, b = b, a