If you have a list in Python 3.7:
>>> li
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
You can turn that into a list of chunks each of length n with one of two common Python idioms:
>>> n=3
>>> list(zip(*[iter(li)]*n))
[(0, 1, 2), (3, 4, 5), (6, 7, 8)]
This drops the last incomplete tuple, since (9, 10) is not of length n.
You can also do:
>>> [li[i:i+n] for i in range(0,len(li),n)]
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
if you want the last sublist even if it has fewer than n elements.
Suppose now I have a generator, gen, of unknown length or termination (so calling list(gen) or sum(1 for _ in gen) would not be wise), where I want every chunk.
The best generator expression that I have been able to come up with is something along these lines:
from itertools import zip_longest
sentinel=object() # for use in filtering out ending chunks
gen=(e for e in range(11)) # fill in for the actual gen
g3=(t if sentinel not in t else tuple(filter(lambda x: x != sentinel, t)) for t in zip_longest(*[iter(gen)]*n,fillvalue=sentinel))
That works for the intended purpose:
>>> next(g3)
(0, 1, 2)
>>> next(g3)
(3, 4, 5)
>>> list(g3)
[(6, 7, 8), (9, 10)]
It just seems -- clumsy. I tried:
using islice but the lack of length seems hard to surmount;
using a sentinel in iter but the sentinel version of iter requires a callable, not an iterable.
Is there a more idiomatic Python 3 technique for a generator of chunks of length n, including a last chunk that might be shorter than n?
I am open to a generator function as well. I am looking for something idiomatic and mostly more readable.
Update:
DSM's method in his deleted answer is very good I think:
>>> g3=(iter(lambda it=iter(gen): tuple(islice(it, n)), ()))
>>> next(g3)
(0, 1, 2)
>>> list(g3)
[(3, 4, 5), (6, 7, 8), (9, 10)]
I am open to this question being a dup, but the linked question is almost 10 years old and focused on a list. Is there really no newer method in Python 3 for generators where you don't know the length and don't want more than one chunk at a time?
I think this is always going to be messy as long as you're trying to fit this into a one-liner.
I would just bite the bullet and go with a generator function here. Especially useful if you don't know the actual size (say, if gen is an infinite generator, etc).
from itertools import islice
def chunk(gen, k):
"""Efficiently split `gen` into chunks of size `k`.
Args:
gen: Iterator to chunk.
k: Number of elements per chunk.
Yields:
Chunks as a list.
"""
while True:
chunk = [*islice(gen, 0, k)]
if chunk:
yield chunk
else:
break
>>> gen = iter(list(range(11)))
>>> list(chunk(gen, 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
Someone may have a better suggestion, but this is how I'd do it.
This feels like a pretty reasonable approach that builds just on itertools.
>>> from itertools import count, islice, takewhile
>>> g = (i for i in range(10))
>>> g3 = takewhile(lambda x: x, (list(islice(g, 3)) for _ in count(0)))
>>> list(g3)
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
I have put together some timings for the answers here.
The way I originally wrote it is actually the fastest on Python 3.7; for a one-liner, it is likely the best.
A modified version of coldspeed's answer is fast, Pythonic, and readable.
The other answers are all similar speed.
The benchmark:
from __future__ import print_function
try:
from itertools import zip_longest, takewhile, islice, count
except ImportError:
from itertools import takewhile, islice, count
from itertools import izip_longest as zip_longest
from collections import deque
def f1(it,k):
sentinel=object()
for t in (t if sentinel not in t else tuple(filter(lambda x: x != sentinel, t)) for t in zip_longest(*[iter(it)]*k, fillvalue=sentinel)):
yield t
def f2(it,k):
for t in (iter(lambda it=iter(it): tuple(islice(it, k)), ())):
yield t
def f3(it,k):
while True:
chunk = (*islice(it, 0, k),) # tuple(islice(it, 0, k)) if Python < 3.5
if chunk:
yield chunk
else:
break
def f4(it,k):
for t in takewhile(lambda x: x, (tuple(islice(it,k)) for _ in count(0))):
yield t
if __name__=='__main__':
import timeit
def tf(f, k, x):
data=(y for y in range(x))
return deque(f(data, k), maxlen=3)
k=3
for f in (f1,f2,f3,f4):
print(f.__name__, tf(f,k,100000))
for case, x in (('small',10000),('med',100000),('large',1000000)):
print("Case {}, {:,} x {}".format(case,x,k))
for f in (f1,f2,f3,f4):
print(" {:^10s}{:.4f} secs".format(f.__name__, timeit.timeit("tf(f, k, x)", setup="from __main__ import f, tf, x, k", number=10)))
And the results:
f1 deque([(99993, 99994, 99995), (99996, 99997, 99998), (99999,)], maxlen=3)
f2 deque([(99993, 99994, 99995), (99996, 99997, 99998), (99999,)], maxlen=3)
f3 deque([(99993, 99994, 99995), (99996, 99997, 99998), (99999,)], maxlen=3)
f4 deque([(99993, 99994, 99995), (99996, 99997, 99998), (99999,)], maxlen=3)
Case small, 10,000 x 3
f1 0.0125 secs
f2 0.0231 secs
f3 0.0185 secs
f4 0.0250 secs
Case med, 100,000 x 3
f1 0.1239 secs
f2 0.2270 secs
f3 0.1845 secs
f4 0.2477 secs
Case large, 1,000,000 x 3
f1 1.2140 secs
f2 2.2431 secs
f3 1.7967 secs
f4 2.4697 secs
This solution with a generator function is fairly explicit and short:
import itertools

def g3(seq):
it = iter(seq)
while True:
head = list(itertools.islice(it, 3))
if head:
yield head
else:
break
The itertools recipes section of the docs offers various generator helpers.
Here you can modify take with the second form of iter to create a chunk generator.
from itertools import islice
def chunks(n, it):
it = iter(it)
return iter(lambda: tuple(islice(it, n)), ())
Example
li = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(*chunks(3, li))
Output
(0, 1, 2) (3, 4, 5) (6, 7, 8) (9, 10)
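Since the question is about generators, note that the same call works when the input is one (a quick check with a throwaway generator):
gen = (e for e in range(11))
print(*chunks(3, gen))
Output
(0, 1, 2) (3, 4, 5) (6, 7, 8) (9, 10)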
more_itertools.chunked:
import more_itertools

list(more_itertools.chunked(range(11), 3))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
See also the source:
iter(functools.partial(more_itertools.take, n, iter(iterable)), [])
My attempt using groupby and cycle. With cycle you can choose a pattern for how to group your elements, so it's versatile:
from itertools import groupby, cycle
gen=(e for e in range(11))
d = [list(g) for d, g in groupby(gen, key=lambda v, c=cycle('000111'): next(c))]
print([v for v in d])
Outputs:
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
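To chunk by an arbitrary n instead of the hard-coded '000111' pattern, the cycle can be built from the chunk size; a sketch, where n is assumed to be the desired chunk length:
from itertools import groupby, cycle

n = 3
gen = (e for e in range(11))
pattern = cycle([0] * n + [1] * n)  # n zeros then n ones, repeated
print([list(g) for _, g in groupby(gen, key=lambda v, c=pattern: next(c))])
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]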
We can do this by using the grouper function given on the itertools documentation page.
from itertools import zip_longest
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
# grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return zip_longest(fillvalue=fillvalue, *args)
def out_iterator(lst):
for each in grouper(lst,n):
if None in each:
yield each[:each.index(None)]
else:
yield each
a=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
n=3
print(list(out_iterator(a)))
Output:
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10)]
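One caveat: filtering on None breaks if None is a legitimate value in the data. A dedicated sentinel object (as in the question) sidesteps that; a possible variant of out_iterator:
_sentinel = object()  # never a legitimate data value

def out_iterator(lst):
    for each in grouper(lst, n, fillvalue=_sentinel):
        if _sentinel in each:
            yield each[:each.index(_sentinel)]
        else:
            yield each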
I am trying to write a function in Python. The function is based on an algorithm that does a summation using the sides of polygons with n sides.
For each "loop" you add n[i] + n[i+1].
In Python, can you do this with for loops?
This is a very easy thing to do in languages like Java and C++, but the nature of Python for loops makes it less obvious. Can for loops accomplish this, or should while loops be used?
You can use zip and for-loop here:
>>> lis = range(10)
>>> [x+y for x, y in zip(lis, lis[1:])]
[1, 3, 5, 7, 9, 11, 13, 15, 17]
If the list is huge then you can use itertools.izip and iter:
from itertools import izip, tee
it1, it2 = tee(lis) #creates two iterators from the list(or any iterable)
next(it2) #drop the first item
print [x+y for x, y in izip(it1, it2)]
#[1, 3, 5, 7, 9, 11, 13, 15, 17]
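If you also need the wrap-around sum (last element plus first), rotating the list works with the same zip idiom (a sketch):
>>> [x + y for x, y in zip(lis, lis[1:] + lis[:1])]
[1, 3, 5, 7, 9, 11, 13, 15, 17, 9]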
for i in range(N-1): # i = 0,1, ... N-2
    val = n[i] + n[i+1]
if you want to 'wrap around', you can write
for i in range(N): # i = 0,1, ... N-1
val = n[i] + n[(i+1)%N]
... or use the fact that n[-1] is the same as the last element:
for i in range(N): # i = 0,1, ... N-1
val = n[i-1] + n[i] # [N-1]+[0], [0]+[1], ... [N-2] + [N-1]
This approach will likely be slower but may be easier to follow than zips and iterations.
Let's say I have a tuple generator, which I simulate as follows:
g = (x for x in (1,2,3,97,98,99))
For this specific generator, I wish to write a function to output the following:
(1,2,3)
(2,3,97)
(3,97,98)
(97,98,99)
(98,99)
(99,)
So I'm iterating over three consecutive items at a time and printing them, except when I approach the end.
Should the first line in my function be:
t = tuple(g)
In other words, is it best to work on a tuple directly, or might it be beneficial to work with a generator? If it is possible to approach this problem using both methods, please state the benefits and disadvantages of each approach. Also, if it might be wise to use the generator approach, how might such a solution look?
Here's what I currently do:
def f(data, l):
t = tuple(data)
for j in range(len(t)):
print(t[j:j+l])
data = (x for x in (1,2,3,4,5))
f(data,3)
UPDATE:
Note that I've updated my function to take a second argument specifying the length of the window.
A specific example for returning three items could read
def yield3(gen):
b, c = gen.next(), gen.next()
try:
while True:
a, b, c = b, c, gen.next()
yield (a, b, c)
except StopIteration:
yield (b, c)
yield (c,)
g = (x for x in (1,2,3,97,98,99))
for l in yield3(g):
print l
Actually there are functions for this in the itertools module: tee() and izip_longest():
>>> from itertools import izip_longest, tee
>>> g = (x for x in (1,2,3,97,98,99))
>>> a, b, c = tee(g, 3)
>>> next(b, None)
>>> next(c, None)
>>> next(c, None)
>>> [[x for x in l if x is not None] for l in izip_longest(a, b, c)]
[[1, 2, 3], [2, 3, 97], [3, 97, 98], [97, 98, 99], [98, 99], [99]]
From the documentation:
Return n independent iterators from a single iterable. Equivalent to:
def tee(iterable, n=2):
it = iter(iterable)
deques = [collections.deque() for i in range(n)]
def gen(mydeque):
while True:
if not mydeque: # when the local deque is empty
newval = next(it) # fetch a new value and
for d in deques: # load it to all the deques
d.append(newval)
yield mydeque.popleft()
return tuple(gen(d) for d in deques)
If you might need to take more than three elements at a time, and you don't want to load the whole generator into memory, I suggest using a deque from the collections module in the standard library to store the current set of items. A deque (pronounced "deck" and meaning "double-ended queue") can have values pushed and popped efficiently from both ends.
from collections import deque
from itertools import islice
def get_tuples(gen, n):
q = deque(islice(gen, n)) # pre-load the queue with `n` values
while q: # run until the queue is empty
yield tuple(q) # yield a tuple copied from the current queue
q.popleft() # remove the oldest value from the queue
try:
q.append(next(gen)) # try to add a new value from the generator
except StopIteration:
pass # but we don't care if there are none left
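A quick check with the generator from the question:
>>> g = (x for x in (1, 2, 3, 97, 98, 99))
>>> list(get_tuples(g, 3))
[(1, 2, 3), (2, 3, 97), (3, 97, 98), (97, 98, 99), (98, 99), (99,)]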
Actually, it depends.
A generator might be useful in the case of very large collections, where you don't really need to store them all in memory to achieve the result you want.
On the other hand, since you have to print it, it seems safe to guess that the collection isn't huge, so it doesn't make a difference.
However, this is a generator that achieves what you were looking for:
def part(gen, size):
t = tuple()
try:
while True:
l = gen.next()
if len(t) < size:
t = t + (l,)
if len(t) == size:
yield t
continue
if len(t) == size:
t = t[1:] + (l,)
yield t
continue
except StopIteration:
while len(t) > 1:
t = t[1:]
yield t
>>> a = (x for x in range(10))
>>> list(part(a, 3))
[(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6), (5, 6, 7), (6, 7, 8), (7, 8, 9), (8, 9), (9,)]
>>> a = (x for x in range(10))
>>> list(part(a, 5))
[(0, 1, 2, 3, 4), (1, 2, 3, 4, 5), (2, 3, 4, 5, 6), (3, 4, 5, 6, 7), (4, 5, 6, 7, 8), (5, 6, 7, 8, 9), (6, 7, 8, 9), (7, 8, 9), (8, 9), (9,)]
Note: the code actually isn't very elegant, but it also works when you have to use windows of, say, 5 elements.
It's definitely best to work with the generator because you don't want to have to hold everything in memory.
It can be done very simply with a deque.
from collections import deque
from itertools import islice
def overlapping_chunks(size, iterable, *, head=False, tail=False):
"""
Get overlapping subsections of an iterable of a specified size.
print(*overlapping_chunks(3, (1,2,3,97,98,99)))
#>>> [1, 2, 3] [2, 3, 97] [3, 97, 98] [97, 98, 99]
If head is truthy, the "warm up" before the specified maximum
number of items is included.
print(*overlapping_chunks(3, (1,2,3,97,98,99), head=True))
#>>> [1] [1, 2] [1, 2, 3] [2, 3, 97] [3, 97, 98] [97, 98, 99]
If tail is truthy, the "cool down" after the iterable is exhausted
is included.
print(*overlapping_chunks(3, (1,2,3,97,98,99), tail=True))
#>>> [1, 2, 3] [2, 3, 97] [3, 97, 98] [97, 98, 99] [98, 99] [99]
"""
chunker = deque(maxlen=size)
iterator = iter(iterable)
for item in islice(iterator, size-1):
chunker.append(item)
if head:
yield list(chunker)
for item in iterator:
chunker.append(item)
yield list(chunker)
if tail:
while len(chunker) > 1:
chunker.popleft()
yield list(chunker)
I think what you currently do seems a lot easier than any of the above. If there isn't any particular need to make it more complicated, my opinion would be to keep it simple. In other words, it is best to work on a tuple directly.
Here's a generator that works in both Python 2.7.17 and 3.8.1. Internally it uses iterators and generators whenever possible, so it should be relatively memory efficient.
try:
from itertools import izip, izip_longest, takewhile
except ImportError: # Python 3
izip = zip
from itertools import zip_longest as izip_longest, takewhile
def tuple_window(n, iterable):
iterators = [iter(iterable) for _ in range(n)]
for n, iterator in enumerate(iterators):
for _ in range(n):
next(iterator)
_NULL = object() # Unique singleton object.
for t in izip_longest(*iterators, fillvalue=_NULL):
yield tuple(takewhile(lambda v: v is not _NULL, t))
if __name__ == '__main__':
data = (1, 2, 3, 97, 98, 99)
for t in tuple_window(3, data):
print(t)
Output:
(1, 2, 3)
(2, 3, 97)
(3, 97, 98)
(97, 98, 99)
(98, 99)
(99,)
I have a sparse matrix. I need to sort this matrix row-by-row and create another [sparse] matrix.
Code may explain it better:
# for the `rand` function, you need a newer version of scipy.
from scipy.sparse import *
m = rand(6,6, density=0.6)
d = m.getrow(0)
print d
Output1
(0, 5) 0.874881629788
(0, 4) 0.352559852239
(0, 2) 0.504791645463
(0, 1) 0.885898140175
I have this m matrix. I want to create a new matrix that is a sorted version of m. The new matrix contains the 0th row like this:
new_d = new_m.getrow(0)
print new_d
Output2
(0, 1) 0.885898140175
(0, 5) 0.874881629788
(0, 2) 0.504791645463
(0, 4) 0.352559852239
So I can see which column is bigger, etc.:
print new_d.indices
Output3
array([1, 5, 2, 4])
Of course, every row should be sorted independently, as above.
I have one solution for this problem but it is not elegant.
If you're willing to ignore the zero-value elements of the matrix, the code below should work. It is also much faster than implementations that use the getrow method, which is rather slow.
from itertools import izip
def sort_coo(m):
tuples = izip(m.row, m.col, m.data)
return sorted(tuples, key=lambda x: (x[0], x[2]))
For example:
>>> from numpy.random import rand
>>> from scipy.sparse import coo_matrix
>>>
>>> d = rand(10, 20)
>>> d[d > .05] = 0
>>> s = coo_matrix(d)
>>> sort_coo(s)
[(0, 2, 0.004775589084940246),
(3, 12, 0.029941507166614145),
(5, 19, 0.015030386789436245),
(7, 0, 0.0075044957259399192),
(8, 3, 0.047994403933129481),
(8, 5, 0.049401058471327031),
(9, 15, 0.040011608000125043),
(9, 8, 0.048541825332137023)]
Depending on your needs, you may want to tweak the sort keys in the lambda or further process the output. If you want everything in a row-indexed dictionary, you could do:
from collections import defaultdict
sorted_rows = defaultdict(list)
for i in sort_coo(m):
sorted_rows[i[0]].append((i[1], i[2]))
My bad solution is like this:
from scipy.sparse import coo_matrix
import numpy as np
a = []
for i in xrange(m.shape[0]): # assume m is square matrix.
d = m.getrow(i)
n = len(d.indices)
s = zip([i]*n, d.indices, d.data)
sorted_s = sorted(s, key=lambda v: v[2], reverse=True)
a.extend(sorted_s)
a = np.array(a)
new_m = coo_matrix((a[:,2], (a[:,0], a[:,1])), m.shape)
There can be some simple mistakes above because I have not checked it yet. But the idea is intuitive, I guess. Is there any good solution?
Edit
This new matrix creation may be useless, because if you call the getrow method the order is broken again.
Only coo_matrix.col keeps the order.
Another Solution
This one is not exact solution but it may be helpful:
def sortSparseMatrix(m, rev=True, only_indices=True):
""" Sort a sparse matrix and return column index dictionary
"""
col_dict = dict()
for i in xrange(m.shape[0]): # assume m is square matrix.
d = m.getrow(i)
s = zip(d.indices, d.data)
sorted_s = sorted(s, key=lambda v: v[1], reverse=True)
if only_indices:
col_dict[i] = [element[0] for element in sorted_s]
else:
col_dict[i] = sorted_s
return col_dict
>>> print sortSparseMatrix(m)
{0: [5, 1, 0],
1: [1, 3, 5],
2: [1, 2, 3, 4],
3: [1, 5, 2, 4],
4: [0, 3, 5, 1],
5: [3, 4, 2]}
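For what it's worth, the repeated getrow calls can be avoided by walking the CSR arrays directly; a sketch (assuming m is first converted with tocsr(); indptr delimits each row's slice of data and indices):
import numpy as np

def sort_csr_rows(m, rev=True):
    csr = m.tocsr()
    col_dict = {}
    for i in range(csr.shape[0]):
        start, end = csr.indptr[i], csr.indptr[i + 1]  # this row's nonzeros
        order = np.argsort(csr.data[start:end])
        if rev:
            order = order[::-1]  # biggest values first
        col_dict[i] = csr.indices[start:end][order].tolist()
    return col_dict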
I want an algorithm to iterate over list slices. Slice size is set outside the function and can differ.
In my mind it is something like:
for list_of_x_items in fatherList:
foo(list_of_x_items)
Is there a way to properly define list_of_x_items or some other way of doing this using python 2.5?
edit1: Clarification: Both "partitioning" and "sliding window" terms sound applicable to my task, but I am no expert. So I will explain the problem a bit deeper and add to the question:
The fatherList is a multilevel numpy.array I am getting from a file. The function has to find averages of series (the user provides the length of the series). For averaging I am using the mean() function. Now for the question expansion:
edit2: How to modify the function you have provided to store the extra items and use them when the next fatherList is fed to the function?
For example, if the list has length 10 and the size of a chunk is 3, then the 10th member of the list is stored and appended to the beginning of the next list.
Related:
What is the most “pythonic” way to iterate over a list in chunks?
If you want to divide a list into slices you can use this trick:
list_of_slices = zip(*(iter(the_list),) * slice_size)
For example
>>> zip(*(iter(range(10)),) * 3)
[(0, 1, 2), (3, 4, 5), (6, 7, 8)]
If the number of items is not divisible by the slice size and you want to pad the list with None you can do this:
>>> map(None, *(iter(range(10)),) * 3)
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, None, None)]
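Note that map(None, ...) only works in Python 2; in Python 3 the same padded behavior comes from itertools.zip_longest:
>>> from itertools import zip_longest
>>> list(zip_longest(*(iter(range(10)),) * 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, None, None)]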
It is a dirty little trick
OK, I'll explain how it works. It'll be tricky to explain but I'll try my best.
First a little background:
In Python you can multiply a list by a number like this:
[1, 2, 3] * 3 -> [1, 2, 3, 1, 2, 3, 1, 2, 3]
([1, 2, 3],) * 3 -> ([1, 2, 3], [1, 2, 3], [1, 2, 3])
And an iterator object can be consumed once like this:
>>> l=iter([1, 2, 3])
>>> l.next()
1
>>> l.next()
2
>>> l.next()
3
The zip function returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables. For example:
zip([1, 2, 3], [20, 30, 40]) -> [(1, 20), (2, 30), (3, 40)]
zip(*[(1, 20), (2, 30), (3, 40)]) -> [(1, 2, 3), (20, 30, 40)]
The * in front of zip is used to unpack arguments. You can find more details here.
So
zip(*[(1, 20), (2, 30), (3, 40)])
is actually equivalent to
zip((1, 20), (2, 30), (3, 40))
but works with a variable number of arguments
Now back to the trick:
list_of_slices = zip(*(iter(the_list),) * slice_size)
iter(the_list) -> converts the list into an iterator
(iter(the_list),) * N -> will generate a tuple holding N references to the same list iterator.
zip(*(iter(the_list),) * N) -> will feed that tuple of iterators into zip, which in turn will group the values into N-sized tuples. But since all N items are in fact references to the same iterator, the result comes from repeated calls to next() on the original iterator.
I hope that explains it. I advise you to go with an easier-to-understand solution. I was only tempted to mention this trick because I like it.
If you want to be able to consume any iterable you can use these functions:
from itertools import chain, islice
def ichunked(seq, chunksize):
"""Yields items from an iterator in iterable chunks."""
it = iter(seq)
while True:
yield chain([it.next()], islice(it, chunksize-1))
def chunked(seq, chunksize):
"""Yields items from an iterator in list chunks."""
for chunk in ichunked(seq, chunksize):
yield list(chunk)
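A quick usage check (Python 2, matching the it.next() call above):
>>> for chunk in chunked(xrange(10), 3):
...     print chunk
[0, 1, 2]
[3, 4, 5]
[6, 7, 8]
[9]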
Use a generator:
big_list = [1,2,3,4,5,6,7,8,9]
slice_length = 3
def sliceIterator(lst, sliceLen):
for i in range(len(lst) - sliceLen + 1):
yield lst[i:i + sliceLen]
for slice in sliceIterator(big_list, slice_length):
foo(slice)
sliceIterator implements a "sliding window" of width sliceLen over the sequence lst, i.e. it produces overlapping slices: [1,2,3], [2,3,4], [3,4,5], ... Not sure if that is the OP's intention, though.
Do you mean something like:
def callonslices(size, fatherList, foo):
for i in xrange(0, len(fatherList), size):
foo(fatherList[i:i+size])
If this is roughly the functionality you want you might, if you desire, dress it up a bit in a generator:
def sliceup(size, fatherList):
for i in xrange(0, len(fatherList), size):
yield fatherList[i:i+size]
and then:
def callonslices(size, fatherList, foo):
for sli in sliceup(size, fatherList):
foo(sli)
Answer to the last part of the question:
question update: How to modify the function you have provided to store the extra items and use them when the next fatherList is fed to the function?
If you need to store state then you can use an object for that.
class Chunker(object):
"""Split `iterable` on evenly sized chunks.
Leftovers are remembered and yielded at the next call.
"""
def __init__(self, chunksize):
assert chunksize > 0
self.chunksize = chunksize
self.chunk = []
def __call__(self, iterable):
"""Yield items from `iterable` `self.chunksize` at the time."""
assert len(self.chunk) < self.chunksize
for item in iterable:
self.chunk.append(item)
if len(self.chunk) == self.chunksize:
# yield collected full chunk
yield self.chunk
self.chunk = []
Example:
chunker = Chunker(3)
for s in "abcd", "efgh":
for chunk in chunker(s):
print ''.join(chunk)
if chunker.chunk: # is there anything left?
print ''.join(chunker.chunk)
Output:
abc
def
gh
I am not sure, but it seems you want to do what is called a moving average. numpy provides facilities for this (the convolve function).
>>> x = numpy.array(range(20))
>>> x
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19])
>>> n = 2 # moving average window
>>> numpy.convolve(numpy.ones(n)/n, x)[n-1:-n+1]
array([ 0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5,
9.5, 10.5, 11.5, 12.5, 13.5, 14.5, 15.5, 16.5, 17.5, 18.5])
The nice thing is that it accommodates different weighting schemes nicely (just change numpy.ones(n) / n to something else).
You can find more complete material here:
http://www.scipy.org/Cookbook/SignalSmooth
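For example, swapping in triangular weights instead of the uniform numpy.ones(n) / n (a sketch; the weights just need to sum to 1):
>>> w = numpy.array([0.25, 0.5, 0.25])  # triangular weights
>>> numpy.convolve(w, x)[len(w)-1:-len(w)+1]  # gives 1.0, 2.0, ..., 18.0 for this linear x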
Expanding on the answer of @Ants Aasma: in Python 3.7 the handling of the StopIteration exception changed (according to PEP 479). A compatible version would be:
from itertools import chain, islice
def ichunked(seq, chunksize):
it = iter(seq)
while True:
try:
yield chain([next(it)], islice(it, chunksize - 1))
except StopIteration:
return
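A quick check under Python 3:
>>> for chunk in ichunked(range(10), 3):
...     print(list(chunk))
[0, 1, 2]
[3, 4, 5]
[6, 7, 8]
[9]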
Your question could use some more detail, but how about:
def iterate_over_slices(the_list, slice_size):
    # + 1 so the final full window is not skipped
    for start in range(0, len(the_list) - slice_size + 1):
        slice = the_list[start:start + slice_size]
        foo(slice)
For a near one-liner (after itertools import) in the vein of Nadia's answer, dealing with non-divisible chunk sizes without padding:
>>> import itertools as itt
>>> chunksize = 5
>>> myseq = range(18)
>>> cnt = itt.count()
>>> print [ tuple(grp) for k,grp in itt.groupby(myseq, key=lambda x: cnt.next()//chunksize%2)]
[(0, 1, 2, 3, 4), (5, 6, 7, 8, 9), (10, 11, 12, 13, 14), (15, 16, 17)]
If you want, you can get rid of the itertools.count() requirement using enumerate(), with a rather uglier version:
[ [e[1] for e in grp] for k,grp in itt.groupby(enumerate(myseq), key=lambda x: x[0]//chunksize%2) ]
(In this example the enumerate() would be superfluous, but not all sequences are neat ranges like this, obviously)
Nowhere near as neat as some other answers, but useful in a pinch, especially if already importing itertools.
A function that slices a list or an iterator into chunks of a given size, and correctly handles the case where the last chunk is smaller:
def slice_iterator(data, slice_len):
it = iter(data)
while True:
items = []
for index in range(slice_len):
try:
item = next(it)
except StopIteration:
if items == []:
return # we are done
else:
break # exits the "for" loop
items.append(item)
yield items
Usage example:
for slice in slice_iterator([1,2,3,4,5,6,7,8,9,10],3):
print(slice)
Result:
[1, 2, 3]
[4, 5, 6]
[7, 8, 9]
[10]