Ok. I'm looking for the smartest and most compact way to write this function
def f():
    return [[a, b, c] for a in range(6) for b in range(6) for c in range(6)]
which should generate all the combinations for the values a,b,c like this:
[0,0,0]
[0,0,1]
[0,0,2]
...
[1,0,0]
[1,0,1]
...
and so on...
But I want this to be flexible, so I can change the range or iterable, and also the length of the generated arrays. Range is an easy thing:
def f(min, max):
    return [[a, b, c] for a in range(min, max) for b in range(min, max) for c in range(min, max)]
This is fine for length-3 arrays, but now I'm thinking of length-4 or length-7 arrays, generating all combinations over the same range.
There has to be an easy way, maybe by concatenating arrays or nesting list comprehensions somehow, but my solutions seem far too complex.
Sorry for such a long post.
You can use itertools.product, which is a convenience function for nested iteration. It also has a repeat argument if you want to repeat the same iterable multiple times:
>>> from itertools import product
>>> amin = 0
>>> amax = 2
>>> list(product(range(amin, amax), repeat=3))
[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
To get a list of lists you can use map:
>>> list(map(list, product(range(amin, amax), repeat=3)))
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
However, product returns an iterator, so it's most efficient to simply iterate over it instead of converting it to a list, at least if that's possible in your program. For example:
>>> for prod in product(range(amin, amax), repeat=3):
... print(prod) # one example
(0, 0, 0)
(0, 0, 1)
(0, 1, 0)
(0, 1, 1)
(1, 0, 0)
(1, 0, 1)
(1, 1, 0)
(1, 1, 1)
You can use itertools.product:
from itertools import product
def f(minimum, maximum, n):
    return list(product(*[range(minimum, maximum)] * n))
Drop the list call to get an iterator instead, for memory efficiency.
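For a self-contained check, here is the answer's function again together with a small usage example:

```python
from itertools import product

def f(minimum, maximum, n):
    # build n identical ranges and take their Cartesian product
    return list(product(*[range(minimum, maximum)] * n))

# all length-2 tuples over 0..1
print(f(0, 2, 2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

For n = 3 over a range of 3 values this yields 3**3 = 27 tuples, matching the nested-comprehension version in the question.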
itertools has everything you need. combinations_with_replacement will generate combinations of a given length, with repeated elements allowed, from a given iterable. Note that the returned value is an iterator.
import itertools

def f(min, max, num):
    return itertools.combinations_with_replacement(range(min, max), num)
A pure Python implementation:
k = 2     # length of the tuples
xmin = 2
xmax = 5
n = xmax - xmin
l1 = [x for x in range(n**k)]
l2 = [[ x//n**(k-j-1)%n for x in l1] for j in range(k)]
l3 = [[ xmin + l2[i][j] for i in range(k)] for j in range(n**k)]
l3 is:
[[2, 2], [2, 3], [2, 4], [3, 2], [3, 3], [3, 4], [4, 2], [4, 3], [4, 4]]
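The same base-n digit arithmetic can be wrapped in a function (a hypothetical helper, not part of the answer above) so that the range and the tuple length become parameters:

```python
def tuples(xmin, xmax, k):
    # each x in range(n**k) encodes one k-tuple as a base-n number;
    # digit j of x is x // n**(k - j - 1) % n, shifted up by xmin
    n = xmax - xmin
    return [[xmin + x // n**(k - j - 1) % n for j in range(k)]
            for x in range(n**k)]

print(tuples(2, 5, 2))  # the same 9 pairs as l3 above
```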
What you are looking for is the Cartesian product of the ranges. Luckily, this already exists in itertools:
import itertools
print(list(itertools.product(range(0,5), range(0,5), range(0,5))))
Related
I would like a function that returns combinations of 0s and 1s that sum to 1 or 2, for varying lengths.
I know the total number of combinations (before filtering by sum) is 2**length.
Example:
length = 4
Result:
[[0, 0, 0, 1],
[0, 0, 1, 0],
[0, 0, 1, 1],
[0, 1, 0, 0],
[0, 1, 0, 1],
[0, 1, 1, 0],
[1, 0, 0, 0],
[1, 0, 0, 1],
[1, 0, 1, 0],
[1, 1, 0, 0]]
I was able to use a recursive function to get up to lengths of 10.
After that, Python crashes due to the recursion limit.
I did try increasing the limit, but the program still crashes. I would like to be able to generate all combinations that sum to 1 or 2 up to a length of 40.
That code is listed below:
def recursive_comb(bits):
    """Creates all possible combinations of 0 & 1
    for a given length
    """
    test = []

    def calc_bits(bits, n=0):
        if n.bit_length() <= bits:
            comb_str = '{:0{}b}'.format(n, bits)
            comb_list = [int(elem) for elem in comb_str]
            test.append(comb_list)
            calc_bits(bits, n + 1)

    calc_bits(bits)
    return test
all_comb = recursive_comb(4)
all_comb = [elem for elem in all_comb if ((sum(elem) == 1) or (sum(elem) == 2))]
If you don't mind using an external library (sympy), you could use this:
from sympy.utilities.iterables import multiset_permutations

length = 4
for n in range(1, 3):
    lst = [1] * n + [0] * (length - n)
    for perm in multiset_permutations(lst):
        print(perm)
multiset_permutations generates all distinct permutations of a list whose elements are not pairwise different. I use it here on lists containing the desired numbers of 0s and 1s.
If your lists contain many elements, this will be much more efficient than going through all possible permutations and discarding the duplicates using a set.
You could simply do it like this:
from itertools import permutations
length = 4
result = {p for p in permutations([0, 1]*length, length) if sum(p) in [1, 2]}
print(result)
# output:
# {(0, 0, 0, 1),
# (0, 0, 1, 0),
# (0, 0, 1, 1),
# (0, 1, 0, 0),
# (0, 1, 0, 1),
# (0, 1, 1, 0),
# (1, 0, 0, 0),
# (1, 0, 0, 1),
# (1, 0, 1, 0),
# (1, 1, 0, 0)}
The resulting set contains all permutations that sum up to 1 or 2.
There are some redundant computations done in permutations, so it may take a while depending on length, but you shouldn't run into recursion or memory errors.
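To see how much redundancy there is: permutations([0, 1] * length, length) draws length items from a pool of 2 * length, so for length = 4 it produces 8 * 7 * 6 * 5 = 1680 raw tuples, which the set collapses to just 10 distinct results. A quick count, assuming the same setup as above:

```python
from itertools import permutations

length = 4
# raw permutations, duplicates included
raw = sum(1 for _ in permutations([0, 1] * length, length))
# deduplicated and filtered by sum, as in the answer
distinct = {p for p in permutations([0, 1] * length, length) if sum(p) in [1, 2]}
print(raw, len(distinct))  # 1680 10
```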
Here is another solution with itertools. For each length n, choose k positions for the ones in combinations(n, k) ways. This approach is less general than multiset_permutations from sympy, but is faster for this specific case:
import itertools
# notation:
# n: length of a sequence
# k: number of ones
def f(n, ks):
    for k in ks:
        for idx in itertools.combinations(range(n), k):
            buf = [0] * n
            for i in idx:
                buf[i] = 1
            yield buf
result = list(f(4, [1,2]))
A comparison shows a roughly 20x speedup:
from sympy.utilities.iterables import multiset_permutations
def g(n, ks):
    for k in ks:
        lst = [1] * k + [0] * (n - k)
        for perm in multiset_permutations(lst):
            yield perm
assert sum(1 for _ in f(20, [1,2,3])) == sum(1 for _ in g(20, [1,2,3]))
%timeit sum(1 for _ in g(20, [1,2,3])) # 10ms
%timeit sum(1 for _ in f(20, [1,2,3])) # 500µs
For example, I would like to create a set with n 0s and m 1s, [0, 0, ...0, 1, 1, ..., 1]. Is there a way to do something like [0 for i in range(n), 1 for j in range(m)]?
You can create the two lists separately, then add them like this:
[0 for i in range(n)] + [1 for j in range(m)]
Or simply use list multiplication:
[0]*n + [1]*m
In the event you wanted to do this without having to add two lists together (e.g. if n and m are very large and you want to be able to generate this sequence without putting it all into a list), you could iterate over the combined range with a conditional:
>>> n, m = 2, 3
>>> [0 if i < n else 1 for i in range(n + m)]
[0, 0, 1, 1, 1]
You can also use itertools.chain to "add" two generators together:
>>> from itertools import chain
>>> list(chain((0 for _ in range(n)), (1 for _ in range(m))))
[0, 0, 1, 1, 1]
Stop thinking in terms of doing things in a single line. Think in terms of using useful, modular, extensible abstractions. For these sorts of things, itertools provides a lot of useful abstractions.
>>> from itertools import repeat, chain
>>> list(chain(repeat(0, 4), repeat(1, 3)))
[0, 0, 0, 0, 1, 1, 1]
Start building your own useful abstractions by combining smaller useful abstractions:
>>> from itertools import repeat, chain, starmap
>>> def builder(*pairs, consume=list):
... return consume(chain.from_iterable(starmap(repeat, pairs)))
...
>>> builder((0,3), (1, 4), (2,1), (3, 6))
[0, 0, 0, 1, 1, 1, 1, 2, 3, 3, 3, 3, 3, 3]
Although, sometimes, simpler is better, and learn to take advantage of the various syntactic sugars provided by the language:
>>> [*repeat(0, 4), *repeat(1, 3), *repeat(2,2)]
[0, 0, 0, 0, 1, 1, 1, 2, 2]
You could do:
lst = [0]*n+[1]*m
Yet another potential solution for your problem, using a list comprehension with two for loops:
n,m = 3,4
[i for i, n_numbers in enumerate((n, m)) for _ in range(n_numbers)]
So this is an example of what I want to do.
def multi_range(range_func):
    for a in range_func():
        for b in range_func():
            yield a, b
However, what I actually want is something that works for N loops. I would think something like this.
def multi_range(range_func, N, sofar=None, results=None):
    if sofar is None:
        sofar = []
    if results is None:
        results = []
    for a in range_func():
        if N == 1:
            results.append(sofar + [a])
        else:
            multi_range(range_func, N - 1, sofar + [a], results)
    return results

def test_range():
    yield 0
    yield 1

for b in multi_range(test_range, 3):
    print(b)
This correctly outputs the following.
[0, 0, 0]
[0, 0, 1]
[0, 1, 0]
[0, 1, 1]
[1, 0, 0]
[1, 0, 1]
[1, 1, 0]
[1, 1, 1]
The issue here is it has to create the entire list and store that in memory.
If for example, test_range were this instead, it would consume a large amount of memory.
def test_range():
    for x in range(10000):
        yield x
(Yes, I know it's weird to have a wrapper for range that behaves identically to range. It's just an example.)
That is my question. How can I write a function that behaves like this one without storing all results in a list?
Use itertools.product:
>>> from itertools import product
>>> def test_range():
... yield 0
... yield 1
...
>>> for b in product(test_range(), repeat=3):
... print(b)
...
(0, 0, 0)
(0, 0, 1)
(0, 1, 0)
(0, 1, 1)
(1, 0, 0)
(1, 0, 1)
(1, 1, 0)
(1, 1, 1)
If you're curious how this is implemented, see the sample implementation in the linked doc.
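If you'd rather see it without itertools, the same lazy behavior can be sketched with a recursive generator (a hypothetical variant of the question's multi_range, not the stdlib implementation):

```python
def multi_range(range_func, n):
    # yields tuples one at a time; nothing is ever stored in a list
    if n == 0:
        yield ()
        return
    for head in range_func():
        for tail in multi_range(range_func, n - 1):
            yield (head,) + tail

def test_range():
    yield 0
    yield 1

for b in multi_range(test_range, 3):
    print(b)  # (0, 0, 0), (0, 0, 1), ..., (1, 1, 1)
```

Note that this re-invokes range_func for every prefix, whereas product materializes each input iterable once up front, so product is the better choice when the inputs are expensive to produce.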
In Python, I have a list of ranges like this:
A = [range(0,2),range(0,4),range(0,3),range(0,3)]
First I have to convert each of these ranges into a list of its values. I would have:
B = [[0, 1], [0, 1, 2, 3], [0, 1, 2], [0, 1, 2]]
But after that, I have to create all the possible combinations of elements between the lists. The combination with the lowest values would be [0, 0, 0, 0] and the one with the highest values would be [1, 3, 2, 2]. That makes 2 x 4 x 3 x 3 = 72 combinations in total. How can I achieve this result, starting with the list of ranges (A)?
You can use the built-in itertools module to take the cartesian product of all the range objects in A, and skip making B altogether:
import itertools
A = [range(2), range(4), range(3), range(3)]
list(itertools.product(*A))
Output (skipping some items for readability):
[(0, 0, 0, 0),
(0, 0, 0, 1),
(0, 0, 0, 2),
(0, 0, 1, 0),
(0, 0, 1, 1),
.
.
.
(1, 3, 2, 2)]
Verifying the length:
>>> len(list(itertools.product(*A)))
72
Note that itertools.product() yields tuple objects. If for whatever reason you'd prefer these to be lists, you can use a comprehension:
[[*p] for p in itertools.product(*A)]
Another approach, as @don'ttalkjustcode points out, is that you can avoid creating A entirely and skip directly to the Cartesian product via the map() function:
list(itertools.product(*map(range, (2, 4, 3, 3))))
However, this assumes that all your ranges start at 0.
You could generalize this mapping technique by using a lambda which will create range objects from a list of tuples:
>>> list(map(lambda t: range(*t), ((6, -3, -1), (0, 3), (5,), (10, 1, -2))))
[range(6, -3, -1), range(0, 3), range(0, 5), range(10, 1, -2)]
To get the Cartesian product, do the following:
A = []
for i in range(2):
    for j in range(4):
        for k in range(3):
            for n in range(3):
                combo = [i, j, k, n]
                A.append(combo)
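As a sanity check, these nested loops produce the same 72 combinations, in the same order, as itertools.product:

```python
import itertools

A = []
for i in range(2):
    for j in range(4):
        for k in range(3):
            for n in range(3):
                A.append([i, j, k, n])

# the nested loops iterate the rightmost index fastest, exactly like product
B = [list(p) for p in itertools.product(range(2), range(4), range(3), range(3))]
print(len(A), A == B)  # 72 True
```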
I have a python array in which I want to calculate the sum of every 5 elements. In my case I have the array c with ten elements. (In reality it has a lot more elements.)
c = [1, 0, 0, 0, 0, 2, 0, 0, 0, 0]
So finally I would like to have a new array (c_new) which shows the sum of the first 5 elements and the sum of the second 5 elements.
The result should be:
1+0+0+0+0 = 1
2+0+0+0+0 = 2
c_new = [1, 2]
Thank you for your help
Markus
You can use np.add.reduceat by passing indices where you want to split and sum:
import numpy as np
c = [1, 0, 0, 0, 0, 2, 0, 0, 0, 0]
np.add.reduceat(c, np.arange(0, len(c), 5))
# array([1, 2])
Here's one way of doing it:
c = [1, 0, 0, 0, 0, 2, 0, 0, 0, 0]
print([sum(c[i:i+5]) for i in range(0, len(c), 5)])
Result:
[1, 2]
If five divides the length of your vector and the groups are contiguous, then:
np.reshape(c, (-1, 5)).sum(axis=-1)
It also works if the grouping is non-contiguous, but then it is typically less efficient.
Benchmark:
from timeit import timeit
import numpy as np

def aredat():
    return np.add.reduceat(c, np.arange(0, len(c), 5))

def reshp():
    return np.reshape(c, (-1, 5)).sum(axis=-1)
c = np.random.random(10_000_000)
timeit(aredat, number=100)
3.8516048429883085
timeit(reshp, number=100)
3.09542763303034
So where possible, reshaping seems a bit faster; reduceat has the advantage of gracefully handling vectors whose length is not a multiple of five.
Why don't you use this?
np.array([np.sum(chunk) for chunk in np.asarray(c).reshape(-1, 5)])
There are various ways to achieve that. Below are two options using numpy built-in methods.
Option 1
Using numpy.sum and numpy.ndarray.reshape as follows:
c_sum = np.sum(np.array(c).reshape(-1, 5), axis=1)
[Out]: array([1, 2])
Option 2
Using numpy.vectorize, a custom lambda function, and numpy.arange as follows:
c_sum = np.vectorize(lambda x: sum(c[x:x+5]))(np.arange(0, len(c), 5))
[Out]: array([1, 2])