Python N nested loops

So this is an example of what I want to do.
def multi_range(range_func):
    for a in range_func():
        for b in range_func():
            yield a, b
However, what I actually want is something that works for N loops. I would think something like this.
def multi_range(range_func, N, sofar=None, results=None):
    if sofar is None:
        sofar = []
    if results is None:
        results = []
    for a in range_func():
        if N == 1:
            results.append(sofar + [a])
        else:
            multi_range(range_func, N - 1, sofar + [a], results)
    return results
def test_range():
    yield 0
    yield 1

for b in multi_range(test_range, 3):
    print(b)
This correctly outputs the following.
[0, 0, 0]
[0, 0, 1]
[0, 1, 0]
[0, 1, 1]
[1, 0, 0]
[1, 0, 1]
[1, 1, 0]
[1, 1, 1]
The issue here is it has to create the entire list and store that in memory.
If, for example, test_range were this instead, it would consume a large amount of memory.
def test_range():
    for x in range(10000):
        yield x
(Yes I know it's weird to have a wrapper for range that behaves identically to range. It is just an example)
That is my question. How can I write a function that behaves like this one without storing all results in a list?

Use itertools.product:
>>> from itertools import product
>>> def test_range():
...     yield 0
...     yield 1
...
>>> for b in product(test_range(), repeat=3):
...     print(b)
...
(0, 0, 0)
(0, 0, 1)
(0, 1, 0)
(0, 1, 1)
(1, 0, 0)
(1, 0, 1)
(1, 1, 0)
(1, 1, 1)
If you're curious how this is implemented, see the sample implementation in the linked doc.
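If you'd rather keep a hand-rolled version without itertools, the recursive function from the question can be rewritten as a generator so nothing is accumulated in a list; a minimal sketch (names match the question's code, but this generator version is not from the original post):

```python
def multi_range(range_func, n, sofar=()):
    # Lazily yield tuples instead of appending to a shared results list.
    for a in range_func():
        if n == 1:
            yield sofar + (a,)
        else:
            yield from multi_range(range_func, n - 1, sofar + (a,))

def test_range():
    yield 0
    yield 1

for b in multi_range(test_range, 3):
    print(b)  # (0, 0, 0), (0, 0, 1), ... one tuple at a time
```

Because each level of recursion calls `range_func()` afresh, this never materializes the full result set, matching the memory behavior of `itertools.product` on a re-iterable source.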

Related

Find all combinations of 0, 1 that sum to 1 or 2 at varying lengths / sizes

I would like a function that returns 0/1 combinations that sum to 1 or 2, for varying lengths.
I know the total number of combinations (before filtering on the sum) is 2**length.
Example:
length = 4
Result:
[[0, 0, 0, 1],
[0, 0, 1, 0],
[0, 0, 1, 1],
[0, 1, 0, 0],
[0, 1, 0, 1],
[0, 1, 1, 0],
[1, 0, 0, 0],
[1, 0, 0, 1],
[1, 0, 1, 0],
[1, 1, 0, 0]]
I was able to use a recursive function to get up to lengths of 10.
After that, python crashes due to recursion limit.
I did try increasing it, but this still results in the program crashing. I would like to be able to do all combinations that sum to 1 or 2 up to a length of 40.
That code is listed below:
def recursive_comb(bits):
    """Creates all possible combinations of 0 & 1
    for a given length
    """
    test = []

    def calc_bits(bits, n=0):
        if n.bit_length() <= bits:
            comb_str = '{:0{}b}'.format(n, bits)
            comb_list = [int(elem) for elem in comb_str]
            test.append(comb_list)
            calc_bits(bits, n + 1)

    calc_bits(bits)
    return test
all_comb = recursive_comb(4)
all_comb = [elem for elem in all_comb if ((sum(elem) == 1) or (sum(elem) == 2))]
If you don't mind using an external library (sympy) you could use this:
from sympy.utilities.iterables import multiset_permutations

length = 4
for n in range(1, 3):
    lst = [1] * n + [0] * (length - n)
    for perm in multiset_permutations(lst):
        print(perm)
multiset_permutations generates all distinct permutations of a list whose elements are not pairwise different. I use this on lists with the desired numbers of 0s and 1s.
If your lists contain many elements, this will be much more efficient than going through all possible permutations and discarding the duplicates using a set.
You could simply do it like this:
from itertools import permutations
length = 4
result = {p for p in permutations([0, 1]*length, length) if sum(p) in [1, 2]}
print(result)
# output:
# {(0, 0, 0, 1),
# (0, 0, 1, 0),
# (0, 0, 1, 1),
# (0, 1, 0, 0),
# (0, 1, 0, 1),
# (0, 1, 1, 0),
# (1, 0, 0, 0),
# (1, 0, 0, 1),
# (1, 0, 1, 0),
# (1, 1, 0, 0)}
The resulting set contains all permutations that sum up to 1 or 2.
There are some redundant computations done in permutations, so it may take a while depending on the length, but you shouldn't run into recursion or memory errors.
Here is another solution with itertools. For each length n, choose k positions for the ones in the C(n, k) ways that itertools.combinations enumerates. This approach is less general than multiset_permutations from sympy, but is faster for this specific case:
import itertools

# notation:
# n: length of a sequence
# k: number of ones
def f(n, ks):
    for k in ks:
        for idx in itertools.combinations(range(n), k):
            buf = [0] * n
            for i in idx:
                buf[i] = 1
            yield buf

result = list(f(4, [1, 2]))
Comparison, showing a roughly 20x speedup:
from sympy.utilities.iterables import multiset_permutations

def g(n, ks):
    for k in ks:
        lst = [1] * k + [0] * (n - k)
        for perm in multiset_permutations(lst):
            yield perm

assert sum(1 for _ in f(20, [1,2,3])) == sum(1 for _ in g(20, [1,2,3]))
%timeit sum(1 for _ in g(20, [1,2,3]))  # 10 ms
%timeit sum(1 for _ in f(20, [1,2,3]))  # 500 µs
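As a sanity check on either generator: the number of length-n sequences summing to 1 or 2 is C(n, 1) + C(n, 2), so even n = 40 yields only 820 results. A sketch using math.comb against the combinations-based f above (restated here so the snippet is self-contained; the check itself is an addition, not from the answers):

```python
import itertools
import math

def f(n, ks):
    # Place k ones among n positions; combinations() picks the index sets.
    for k in ks:
        for idx in itertools.combinations(range(n), k):
            buf = [0] * n
            for i in idx:
                buf[i] = 1
            yield buf

n = 40
expected = math.comb(n, 1) + math.comb(n, 2)  # 40 + 780 = 820
assert sum(1 for _ in f(n, [1, 2])) == expected
```

This confirms the original recursion-limit problem was an artifact of enumerating all 2**40 bit patterns, not of the size of the answer set.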

List wrapping for finding distance between indices

I have a random generated list that could look like:
[1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
I need to find all of the distance between the 1's including the ones that wrap around.
For example, in the list above, the first 1 has a distance of 3 to the next 1; the second 1 has a distance of 1 to the following 1, and so on.
How do I find the distance for the last 1 in the list using wrap around to the first 1?
def calc_dist(loc_c):
    first = []
    #lst2 = []
    count = 0
    for i in range(len(loc_c)):
        if loc_c[i] == 0:
            count += 1
            #lst2.append(0)
        elif loc_c[i] == 1:
            first.append(i)
            count += 1
            loc_c[i] = count
            #lst2.append(loc_c[i])
            #if loc_c[i] + count > len(loc_c):
            #    x = loc_c[first[0] + 11 % len(loc_c)]
            #    loc_c[i] = x
            count = 0
    return loc_c
My expected outcome should be [3, 1, 2, 4].
Store the index of the first 1 you encounter; then when you get to the last 1, you only have to add the first index to the distance from the last 1 to the end of the list to get the wrap-around distance (so len(inputlist) - lastindex + firstindex).
The other distances are the difference between the preceding 1 value and the current index.
from typing import Any, Generator, Iterable

def distances(it: Iterable[Any]) -> Generator[int, None, None]:
    """Produce distances between true values in an iterable.

    If the iterable is not endless, the final distance is that of the last
    true value to the first as if the sequence of values looped round.
    """
    first = prev = None
    length = 0
    for i, v in enumerate(it):
        length += 1
        if v:
            if first is None:
                first = i
            else:
                yield i - prev
            prev = i
    if first is not None:
        yield length - prev + first
The above generator calculates distances as it loops over the input sequence, yielding them one by one:
>>> for distance in distances([1, 0, 0, 1, 1, 0, 1, 0, 0, 0]):
... print(distance)
...
3
1
2
4
Just call list() on the generator if you must have list output:
>>> list(distances([1, 0, 0, 1, 1, 0, 1, 0, 0, 0]))
[3, 1, 2, 4]
If there are no 1 values, no distances are yielded:
>>> list(distances([0, 0, 0]))
[]
and a single 1 value gives you one distance:
>>> list(distances([1, 0, 0]))
[3]
I've made the solution generic enough to be able to handle any iterable, even if infinite; this means you can use another generator to feed it too. If given an infinite iterable that produces at least some non-zero values, it'll just keep producing distances.
Nice and tidy:
def calc_dist(l):
    idx = [i for i, v in enumerate(l) if v]
    if not idx:
        return []
    idx.append(len(l) + idx[0])
    return [idx[i] - idx[i-1] for i in range(1, len(idx))]
print(calc_dist([1, 0, 0, 1, 1, 0, 1, 0, 0, 0]))
# [3, 1, 2, 4]
print(calc_dist([0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0]))
# [3, 1, 2, 7]
print(calc_dist([0, 0, 0, 0]))
# []
You can use numpy:
import numpy as np

L = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])
id = np.where(L == 1)[0]
# id = array([0, 3, 4, 6], dtype=int64)
res = [id[i] - id[i-1] for i in range(1, len(id))]
# [3, 1, 2]
# Last (wrap-around) distance missing:
res.append(len(L) - id[-1] + id[0])
# res = [3, 1, 2, 4]
Note that the information you ask for is comprised above, but maybe the output format is wrong. You were not really specific...
Edit: how to convert the list to an array, since you generate a random list:
L = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
L = np.asarray(L)
Edit2: how to check if there is no 1 in the list:
import numpy as np

L = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])
id = np.where(L == 1)[0]
if len(id) == 0:
    res = []
else:
    res = [id[i] - id[i-1] for i in range(1, len(id))]
    res.append(len(L) - id[-1] + id[0])
OR:
try:
    res = [id[i] - id[i-1] for i in range(1, len(id))]
    res.append(len(L) - id[-1] + id[0])
except IndexError:
    res = []
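The two-step numpy version can also be collapsed into a single np.diff over the indices of the 1s, with the first index (shifted by the list length) appended to handle the wrap-around; this sketch is an addition, not part of the original answers:

```python
import numpy as np

L = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])
idx = np.flatnonzero(L)  # indices of the 1s: [0, 3, 4, 6]
# Append the first index shifted by len(L) so the final gap wraps around.
res = np.diff(np.append(idx, idx[0] + len(L))).tolist() if len(idx) else []
print(res)  # [3, 1, 2, 4]
```

Because the wrap-around index is appended before diffing, the empty-input case is the only special case left to guard.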

How to apply bitwise-like XOR between lists of different lengths?

What I would like to do is to initialize an array which has 5 elements set to 0 and then copy the other array onto the first one, something like this:
a = [0, 0, 0, 0, 0]
b = [1, 2, 3]
print a | b
[1, 2, 3, 0, 0]
Is there any pythonic way of doing so other than:
for i, x in enumerate(b):
    a[i] = x
Edit:
I forgot to mention that buffer a will always be filled with plain zeroes at the beginning and condition len(b) < len(a) is always true, also in each case buffer a will always start getting overwritten from index 0.
I'll explain why I need this kind of behaviour in the first place. Basically, I have a raw 256-byte UDP frame. Buffer a corresponds to bytes 16-31 in the frame. Depending on some conditions, those bytes will either be overwritten or be set to 0; the length of b is always 12.
def foo(b=None):
    if b is None:
        b = 12 * [0]
    a = 16 * [0]
    a[:12] = b
    return a
>>> a[:len(b)] = b[:]
>>> a
[1, 2, 3, 0, 0]
This works in Python 2:
import itertools
a = [0, 0, 0, 0, 0]
b = [1, 2, 3]
g = (l | r for (l, r) in itertools.izip_longest(a, b, fillvalue=0))
print list(g)
And this in Python 3:
import itertools
a = [0, 0, 0, 0, 0]
b = [1, 2, 3]
g = (l | r for (l, r) in itertools.zip_longest(a, b, fillvalue=0))
print(list(g))
I created a generator g, but if you know in advance that you want all its values, it's fine to use a list comprehension right away instead.
This is the doc for zip_longest: https://docs.python.org/3/library/itertools.html#itertools.zip_longest
Directly with the list comprehension (py3):
import itertools
a = [0, 0, 0, 0, 0]
b = [1, 2, 3]
g = [l | r for (l, r) in itertools.zip_longest(a, b, fillvalue=0)]
print(g)
Why waste time defining a in the first place? You can simply append the correct number of 0s to b instead:
>>> b = [1, 2, 3]
>>> a = b + [0] * (5 - len(b))
>>> a
[1, 2, 3, 0, 0]
Something like this? (Note: the XOR operator is ^.)
import itertools

a = [0, 0, 0, 0, 0]
b = [1, 2, 3]

def safeXOR(arr1, arr2):
    return list(x ^ y for (x, y) in itertools.zip_longest(arr1, arr2, fillvalue=0))

print(safeXOR(a, b))
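Since the question mentions a raw 256-byte UDP frame, it may be worth noting (this is an addition, not part of the original answers) that the same slice-assignment idiom works on bytearray, which is the more natural type for that job:

```python
frame = bytearray(16)           # zero-filled region, e.g. bytes 16-31 of a frame
payload = bytes([1, 2, 3])      # shorter source buffer
frame[:len(payload)] = payload  # overwrite from index 0; the rest stays zero
print(list(frame[:5]))  # [1, 2, 3, 0, 0]
```

A bytearray is mutable like a list but stores raw bytes, so the same buffer can be reused and later sent over a socket without conversion.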

How to remove mirror reflection values from the itertools.product output?

I create a Cartesian product using the itertools.product function:
import itertools

a = list(map(list, itertools.product(range(2), repeat=3)))
Output:
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
Then I get rid of mirror reflections in the following way:
b = []
for k, v in enumerate(a):
    if v[::-1] not in a[:k]:
        b.append(v[::-1])
Output:
[[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]]
But can I get the same effect step by step without saving all the results of itertools.product in the list? For example, with the usual approach on the for loop:
for i in list(map(list, itertools.product(list(range(2)), repeat=3))):
# blah blah blah
Because ultimately I will use large cartesian products, at least repeat = 18. And that is why I have to give up the approach on the lists. Unless there is another way to do it? I will be grateful for any tips.
import itertools

l = (list(i) for i in itertools.product(range(2), repeat=3) if tuple(reversed(i)) >= i)
print(list(l))
Output:
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
Here is an idea for a recursive algorithm to generate only the necessary combinations (as opposed to generate the whole Cartesian product and discarding the unnecessary ones):
def noReflections(n, k, current=None, idx=0, symmetric=True):
    # n: number of distinct elements
    # k: sequence length
    # current: sequence being generated
    # idx: current generated index
    # symmetric: true if the chosen elements up to now are symmetric
    assert n >= 0 and k >= 0
    if n == 0 or k == 0:
        return
    if idx == 0:
        current = k * [0]
    if idx < k // 2:
        # Choose the value for the current position (idx) and its symmetric (idx2)
        idx2 = k - idx - 1
        for i in range(n):
            # Value for current position
            current[idx] = i
            # If all previously selected values were symmetric,
            # the symmetric position must have a value equal or greater
            # than the current; otherwise it can take any value.
            first = i if symmetric else 0
            for j in range(first, n):
                # Value for symmetric position
                current[idx2] = j
                # Recursive call.
                # Only keep the symmetric flag if previously selected values
                # and the ones selected now are symmetric.
                yield from noReflections(n, k, current, idx + 1, symmetric and (i == j))
    elif idx == k // 2 and (k % 2 == 1):
        # In the middle position of an odd-length sequence,
        # make one sequence with each possible value.
        for i in range(n):
            current[idx] = i
            yield tuple(current)
    else:
        # Even-length sequence completed
        yield tuple(current)

print(list(noReflections(2, 3)))
# [(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
I'm not sure this should perform better than the other answer though, because of the recursion and so on (in a couple of quick tests both performed similarly on my machine).
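As a quick cross-check on both approaches here: every non-palindromic sequence pairs with its mirror, so for n values and length k the mirror-free count should be (n**k + n**ceil(k/2)) // 2, where n**ceil(k/2) counts the palindromes. The formula and sketch below are my derivation, not from the answers:

```python
from itertools import product

def count_no_mirrors(n, k):
    # n**ceil(k/2) palindromes are their own mirror; all other
    # sequences pair up with their reversal, so divide the rest by 2.
    return (n**k + n**((k + 1) // 2)) // 2

kept = [p for p in product(range(2), repeat=3) if tuple(reversed(p)) >= p]
assert len(kept) == count_no_mirrors(2, 3) == 6
```

For the repeat=18 case mentioned in the question this predicts (2**18 + 2**9) // 2 = 131,328 sequences, so a lazy filter over product remains entirely feasible.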

Python: Generating all n-length arrays combinations of values within a range

OK, I'm looking for the smartest and most compact way to write this function:
def f():
    return [[a, b, c] for a in range(6) for b in range(6) for c in range(6)]
which should generate all the combinations for the values a,b,c like this:
[0,0,0]
[0,0,1]
[0,0,2]
...
[1,0,0]
[1,0,1]
...
and so on...
But I want this to be flexible, so I can change the range or iterable, and also the length of the generated arrays. Range is an easy thing:
def f(min, max):
    return [[a, b, c] for a in range(min, max) for b in range(min, max) for c in range(min, max)]
This is ok for 3-length arrays, but I'm thinking now of making 4-length arrays or 7-length arrays and generate all combinations for them in the same range.
There has to be an easy way, maybe by concatenating arrays or nesting list comprehensions in some way, but my solutions seem too complex.
Sorry for such a long post.
You can use itertools.product which is just a convenience function for nested iterations. It also has a repeat-argument if you want to repeat the same iterable multiple times:
>>> from itertools import product
>>> amin = 0
>>> amax = 2
>>> list(product(range(amin, amax), repeat=3))
[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
To get the list of list you could use map:
>>> list(map(list, product(range(amin, amax), repeat=3)))
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
However product is an iterator so it's really efficient if you just iterate over it instead of casting it to a list. At least if that's possible in your program. For example:
>>> for prod in product(range(amin, amax), repeat=3):
... print(prod) # one example
(0, 0, 0)
(0, 0, 1)
(0, 1, 0)
(0, 1, 1)
(1, 0, 0)
(1, 0, 1)
(1, 1, 0)
(1, 1, 1)
You can use itertools.product:
from itertools import product
def f(minimum, maximum, n):
    return list(product(*[range(minimum, maximum)] * n))
Drop list to return a generator for memory efficiency.
itertools has everything you need. combinations_with_replacement will generate combinations of a given length with repeating elements from the given iterable. Note that the returned value is an iterator, and that it yields each combination only once, in sorted order (e.g. (0, 1) but never (1, 0)), so it is not the same as the full Cartesian product:
import itertools

def f(min, max, num):
    return itertools.combinations_with_replacement(range(min, max), num)
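One caveat worth demonstrating: combinations_with_replacement yields each multiset exactly once, in sorted order, so it does not produce every ordering the way product does. A small sketch of the difference:

```python
from itertools import combinations_with_replacement, product

cwr = list(combinations_with_replacement(range(2), 2))
prod = list(product(range(2), repeat=2))
assert cwr == [(0, 0), (0, 1), (1, 1)]           # (1, 0) is never produced
assert prod == [(0, 0), (0, 1), (1, 0), (1, 1)]  # every ordering appears
```

If the goal is "all n-length arrays of values in a range", as in this question, product is the matching tool; combinations_with_replacement fits only when order does not matter.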
A pure python implementation :
k=2 # k-uples
xmin=2
xmax=5
n=xmax-xmin
l1 = [x for x in range(n**k)]
l2 = [[ x//n**(k-j-1)%n for x in l1] for j in range(k)]
l3 = [[ xmin + l2[i][j] for i in range(k)] for j in range(n**k)]
l3 is:
[[2, 2], [2, 3], [2, 4], [3, 2], [3, 3], [3, 4], [4, 2], [4, 3], [4, 4]]
What you are looking for is the Cartesian product of the ranges. Luckily, this already exists in itertools:
import itertools
print(list(itertools.product(range(0,5), range(0,5), range(0,5))))
