I was having trouble with a project and was later able to complete it successfully. However, while reading through some code written by someone else, I noticed they were able to use an iterator (a for loop) within the join function.
example:
' '.join(x for x in name.split('*'))
I thought this was awesome as it helped me cut down lines of code from my original draft.
So my question is: Are there any documents that have a list of functions that accept iterators?
I could be mistaken here, but I think what you mean by "iterator" is in fact called a comprehension in Python. It's not that the expression in question doesn't return an iterable; rather, it seems you are impressed not by the fact that you could pass an iterable to the join function, but by the fact that you could put what looks like flow control inline. Again, tell me if I'm wrong about this.
Comprehension-style expressions can be written with parentheses (which return a generator) or with square brackets (which return a list). To see the difference between the two, type the following in a Python shell:
>>> (x for x in 'cool')
<generator object <genexpr> at 0x03980990>
>>> [x for x in 'cool']
['c', 'o', 'o', 'l']
I would imagine it is obvious how you can work with a list, but if you want to learn more about how generators work, you might want to check this out.
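For example, here is a quick sketch (the names are just illustrative) showing that a generator produces its items lazily and is used up after a single pass:

>>> gen = (x for x in 'cool')
>>> next(gen)
'c'
>>> next(gen)
'o'
>>> list(gen)
['o', 'l']
>>> list(gen)
[]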
Also, the fun doesn't end there with list comprehensions. The possibilities are endless.
>>> [x for x in [1,5,4,7,8,2,6,3] if x > 3]
[5, 4, 7, 8, 6]
>>> [(x,y) for x in range(3) for y in range(3)]
[(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
To learn more about list comprehensions in general, try here.
They're called generators, and they work in many places that accept lists or tuples. The generic term for all three is iterable. But it depends on what the code in question does. If it just iterates then a generator will work. If it tries to get the len() or access items by index, it won't.
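For example (a small illustration, not from the original post), a function that only iterates, like sum(), is happy with a generator, while len() is not:

>>> squares = (n * n for n in range(5))
>>> sum(squares)
30
>>> len(n * n for n in range(5))
Traceback (most recent call last):
  ...
TypeError: object of type 'generator' has no len()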
There isn't a list of functions that accept generators or iterables, no; nobody organizes documentation that way.
Technically, the argument to str.join() in your example is called a "generator expression". A generator expression evaluates to an iterable object; note that an iterable is not necessarily an iterator (but every iterator is iterable).
I assume your question really was about "functions that accept generator expressions". If so, the answer is above: any function that expects an iterable. Since arguments are evaluated before being passed, the generator expression is turned into a generator object (which is iterable) before the function is actually called.
Note that there's a distinction to be made between iterables and "sequence types" (strings, tuples, lists, sets, etc.): the latter are indeed iterable, but they have some other specifics too (i.e. they usually have a length, can be iterated over more than once, etc.), so not all functions expecting a sequence will work with non-sequence iterables. But this is usually documented.
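A quick sketch of that difference, using nothing beyond the builtins: a list can be traversed repeatedly, while a generator is consumed by its first pass:

>>> seq = [1, 2, 3]
>>> gen = (n for n in seq)
>>> sum(seq), sum(seq)
(6, 6)
>>> sum(gen), sum(gen)
(6, 0)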
I'm trying to get the Cartesian product of multiple arrays, but the arrays are pretty large and I am trying to optimize memory usage. I have tried implementing a generator by using the code below, but it just prints that there is a generator object at a certain location.
import itertools
x = [[1,2],[3,4]]
def iter_tools(*array):
    yield list(itertools.product(*array))
print(iter_tools(*x))
When I try the same code but with return instead of yield it works fine. How could I get the Cartesian product by implementing a generator?
Bottom line, itertools.product is already an iterator. You don't need to write your own. (A generator is a kind of iterator.) For example:
>>> x = [[1, 2], [3, 4]]
>>> p = itertools.product(*x)
>>> next(p)
(1, 3)
>>> next(p)
(1, 4)
Now, to explain, it seems like you're misunderstanding something fundamental. A generator function returns a generator iterator. That's what you're seeing from the print:
>>> iter_tools(*x)
<generator object iter_tools at 0x7f05d9bc3660>
Use list() to consume an iterator and build a list from it.
>>> list(iter_tools(*x))
[[(1, 3), (1, 4), (2, 3), (2, 4)]]
Note how it's a nested list. That's because your iter_tools yields one list and then nothing else. That part also makes little sense, because converting itertools.product to a list defeats the whole purpose of an iterator: lazy evaluation. If you actually wanted to yield the values from an iterator one at a time, you would use yield from:
def iter_tools(*array):
    yield from itertools.product(*array)
In this case iter_tools is pointless, but if your actual iter_tools is more complex, this might be what you actually want.
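For instance, with the x from the question, the yield from version can be consumed lazily, one product at a time (a quick sketch, not part of the original answer):

>>> gen = iter_tools(*x)
>>> next(gen)
(1, 3)
>>> next(gen)
(1, 4)
>>> list(gen)
[(2, 3), (2, 4)]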
See also:
what's the difference between yield from and yield in python 3.3.2+
How to Use Generators and yield in Python - Real Python
This answer is partly based on juanpa.arrivillaga's comment
The idea of a generator is that you don't do all the computation at once, as you do with your call list(itertools.product(*array)). Instead, you generate the results one by one. For example, like this:
def iter_tools(*array):
    for i in array[0]:
        for j in array[1]:
            yield (i, j)
You can then do something with each resulting tuple like this:
for tup in iter_tools(*x):
    print(tup)
Of course you can easily adapt the generator so that it yields a whole row or column per call.
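For instance, a hedged sketch of a row-per-call variant (the name iter_rows is made up here, and it only handles the two-list case):

def iter_rows(*array):
    # Yield one full row of the product per call (two-list case only).
    for i in array[0]:
        yield [(i, j) for j in array[1]]

for row in iter_rows(*x):
    print(row)
# [(1, 3), (1, 4)]
# [(2, 3), (2, 4)]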
Or if you are happy with what itertools provides:
for i in itertools.product(*x):
    print(i)
What you need depends on your use-case. Hope I could help you :)
If you want to yield individual items from the Cartesian product, you need to iterate over the product:
import itertools
x = [[1,2],[3,4]]
def iter_tools(*array):
    for a in itertools.product(*array):
        yield a

for a in iter_tools(*x):
    print(a)
I am having a problem understanding why one of the following lines returns a generator and the other a tuple.
How exactly, and why, is a generator created in the second line, while a tuple is produced in the third one?
sample_list = [1, 2, 3, 4]
generator = (i for i in sample_list)
tuple_ = (1, 2, 3, 4)
print type(generator)
<type 'generator'>
print type(tuple_)
<type 'tuple'>
Is it because a tuple is an immutable object, and when I try to unpack the list inside (), it can't create the tuple because it would have to keep changing the tuple as it goes?
You can imagine tuples as being created when you hardcode the values, while generators are created where you provide a way to create the objects.
This works since there is no way (1,2,3,4) could be a generator. There is nothing to generate there, you just specified all the elements, not a rule to obtain them.
In order for your generator to be a tuple, the expression (i for i in sample_list) would have to be a tuple comprehension. There is no way to have tuple comprehensions, since comprehensions require a mutable data type.
Thus, the syntax for what should have been a tuple comprehension has been reused for generators.
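If you actually want a tuple built from such an expression, you can pass the generator to tuple() (a small sketch):

>>> sample_list = [1, 2, 3, 4]
>>> tuple(i for i in sample_list)
(1, 2, 3, 4)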
Parentheses are used for three different things: grouping, tuple literals, and function calls. Compare (1 + 2) (an integer) and (1, 2) (a tuple). In the generator assignment, the parentheses are for grouping; in the tuple assignment, the parentheses are a tuple literal. Parentheses represent a tuple literal when they contain a comma and are not used for a function call.
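A quick illustration of grouping versus a tuple literal (shown in a Python 2 shell to match the question):

>>> type((1 + 2))
<type 'int'>
>>> type((1, 2))
<type 'tuple'>
>>> type((1,))
<type 'tuple'>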
I need help understanding a homework assignment that has been giving me a TON of trouble. I have attempted many different methods to get the following assignment to produce the desired result:
Create a module named task_05.py
Create a function named flip_keys() that takes one argument:
a. A list named to_flip. This list is assumed to have nested, immutable sequences inside it, e.g. [(1, 2, 3), 'hello']
Use a for loop to loop over the list and reverse the order of the inner sequences. All operations on the outer list must operate on the original object, taking advantage of its mutability. Inner elements are immutable and will require replacement.
The function should return the original list with its inner elements reversed.
My professor will evaluate the result of my script by entering the following into the Python shell:
>>> LIST = [(1, 2, 3), 'abc']
>>> NEW = flip_keys(LIST)
>>> LIST is NEW
True
>>> print LIST
[(3, 2, 1), 'cba']
I don't know what I'm doing wrong, and my professor hasn't responded. Students haven't responded either, and I have reviewed the material multiple times to try to find the answer. Something isn't clicking in my brain.
He provided the following hints, which I believe I have implemented in my script:
Hint
Consider how to access or change the value of a list. You did it
already in task 2!
Hint
In order to change the value in to_flip you'll need some way to know which index you're attempting to change. To do this, create a variable to act as a counter and increment it within your loop, e.g.:

counter = 0
for value in iterable_object:
    # do something
    counter += 1

Now consider what that counter could represent. At the end of this loop, does counter == len(iterable_object)?
Hint
For an idea on how to reverse a tuple, head back to an earlier
assignment when you reversed a string using the slice syntax.
Here's my latest script, without comments (because I don't write them in until the script works):
def flip_keys(to_flip):
    for loop_list in to_flip:
        to_flip = [to_flip[0][::-1], to_flip[1][::-1]]
        return to_flip
When I test the script using the commands pasted above, I get these results:
>>> LIST = [(1, 2, 3), 'abc']
>>> NEW = flip_keys(LIST)
>>> LIST is NEW
False
>>> print flip_keys(LIST)
[(3, 2, 1), 'cba']
>>> print LIST
[(1, 2, 3), 'abc']
The goal of the assignment is to experiment with mutability, which I think I understand. The problem I'm facing is that the LIST variable is supposed to be updated by the function, but this never happens.
The following is supposed to evaluate to True, not False, and then print the reversed list value stored in the LIST constant.
>>> LIST = [(1, 2, 3), 'abc']
>>> NEW = flip_keys(LIST)
>>> LIST is NEW
False
Please let me know if this is enough information. I have spent way too much time on this, and at this point my assignment is 4 days late and I'm receiving no support from my professor or fellow students (I've informed my adviser).
You are returning a new list from your function. Use a full slice [:] assignment to do an in-place mutation of the original list.
You can also use the more conventional way of creating lists, a list comprehension, instead of the for loop:
def flip_keys(to_flip):
    to_flip[:] = [i[::-1] for i in to_flip]
    return to_flip
Test
>>> LIST = [(1, 2, 3), 'abc']
>>> NEW = flip_keys(LIST)
>>> NEW
[(3, 2, 1), 'cba']
>>> NEW is LIST
True
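To see why the full slice matters, here is a small illustration (separate from the grading script) comparing the object's identity before and after the in-place assignment:

>>> LIST = [(1, 2, 3), 'abc']
>>> before = id(LIST)
>>> LIST[:] = [item[::-1] for item in LIST]
>>> id(LIST) == before
True
>>> LIST
[(3, 2, 1), 'cba']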
IMO, mutating a mutable argument and returning it doesn't feel right/conventional. This could make for a good discussion in your next class.
I was reading through some older code of mine and came across this line
itertools.starmap(lambda x, y: x + (y,),
                  itertools.izip(itertools.repeat(some_tuple,
                                                  len(list_of_tuples)),
                                 itertools.imap(lambda x: x[0],
                                                list_of_tuples)))
To be clear, I have some list_of_tuples from which I want to get the first item out of each tuple (the itertools.imap), I have another tuple that I want to repeat (itertools.repeat) such that there is a copy for each tuple in list_of_tuples, and then I want to get new, longer tuples based on the items from list_of_tuples (itertools.starmap).
For example, suppose some_tuple = (1, 2, 3) and list_of_tuples = [(1, other_info), (5, other), (8, 12)]. I want something like [(1, 2, 3, 1), (1, 2, 3, 5), (1, 2, 3, 8)]. This isn't the exact IO (it uses some pretty irrelevant and complex classes) and my actual lists and tuples are very big.
Is there a point to nesting the iterators like this? It seems to me like each function from itertools would have to iterate over the iterator I gave it and store the information from it somewhere, meaning that there is no benefit to putting the other iterators inside of starmap. Am I just completely wrong? How does this work?
There is no need to nest the iterators like this. Using intermediate variables won't have a noticeable impact on performance or memory:
first_items = itertools.imap(lambda x: x[0], list_of_tuples)
repeated_tuple = itertools.repeat(some_tuple, len(list_of_tuples))
items = itertools.izip(repeated_tuple, first_items)
result = itertools.starmap(lambda x,y: x + (y,), items)
The iterator objects used and returned by itertools do not store all the items in memory, but simply calculate the next item when it is needed. You can read more about how they work here.
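As a rough illustration of that laziness (using the Python 2 itertools names from the question; the huge repeat count is just to make the point that nothing is materialized up front):

>>> big = itertools.repeat((1, 2, 3), 10 ** 9)
>>> pairs = itertools.izip(big, itertools.count())
>>> next(pairs)
((1, 2, 3), 0)
>>> next(pairs)
((1, 2, 3), 1)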
I don't believe the convoluted nesting above is necessary in this case.
It appears to be equivalent to this generator expression:
(some_tuple + (y[0],) for y in list_of_tuples)
However, itertools can occasionally have a performance advantage, especially in CPython.
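Using the sample data from the question (with 'a' and 'b' as stand-ins for other_info and other), the generator expression gives the expected result:

>>> some_tuple = (1, 2, 3)
>>> list_of_tuples = [(1, 'a'), (5, 'b'), (8, 12)]
>>> list(some_tuple + (y[0],) for y in list_of_tuples)
[(1, 2, 3, 1), (1, 2, 3, 5), (1, 2, 3, 8)]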
According to the Python documentation, when I do range(0, 10) the output of this function is a list from 0 to 9, i.e. [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]. However, the Python installation on my PC is not outputting this, despite many examples online where it works.
Here is my test code...
test_range_function = range(0, 10)
print(test_range_function)
print(type(test_range_function))
I'm thinking the output of this should be the printed list, and the type function should report it as a list. Instead I'm getting the following output...
c:\Programming>python range.py
range(0, 10)
<class 'range'>
I haven't seen this in any of the examples online and would really appreciate some light being shed on this.
That's because in Python 3, range (like other functional-style constructs such as map and filter) no longer returns a list; it returns a lazy object instead. In Python 2 these returned lists.
What’s New In Python 3.0:
range() now behaves like xrange() used to behave, except it works with
values of arbitrary size. The latter no longer exists.
To convert an iterator to a list you can use the list function:
>>> list(range(5)) #you can use list()
[0, 1, 2, 3, 4]
Usually you do not need to materialize a range into an actual list but just want to iterate over it. So especially for larger ranges using an iterator saves memory.
For this reason range() in Python 3 returns a lazy range object instead (much as xrange() did in Python 2). Use list(range(...)) if you actually want a list for some reason.
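To get a feel for the memory difference, you can compare the size of a range object with that of the materialized list (the exact byte counts vary by Python version and platform; these are from a 64-bit CPython 3 build):

>>> import sys
>>> sys.getsizeof(range(1000000))
48
>>> sys.getsizeof(list(range(1000000)))
8000056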
range() does not return an iterator; a range object is not an iterator, it is an iterable. The iterator protocol requires two methods: __iter__ and __next__.
r = range(10)
'__iter__' in dir(r)  # True
'__next__' in dir(r)  # False
The iterable protocol only requires __iter__, which returns an iterator.
r.__iter__()
# <range_iterator object at 0x7fae5865e030>
range() uses lazy evaluation. That means it does not precalculate and store the values of range(10). Its iterator, range_iterator, computes and returns elements one at a time. This is why, when we print a range object, we do not actually see the contents of the range: they don't exist yet!
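A small sketch of that on-demand behaviour: the range hands out a fresh iterator each time, and the iterator produces values only as you ask for them:

>>> r = range(10)
>>> it = iter(r)
>>> next(it)
0
>>> next(it)
1
>>> list(it)
[2, 3, 4, 5, 6, 7, 8, 9]
>>> list(r)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]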