Speed up Multiset Permutations - python

I'm looking to speed up my code, which takes ~80 milliseconds for 300 sets to generate multiset_permutations from sympy. Ideally this would take only a few milliseconds; also, the more items there are, the slower it gets.
What can I do to make my code faster? Multi-threading? Or converting it to C? Any help on speeding this up would be greatly appreciated.
import numpy as np
from time import monotonic
from sympy.utilities.iterables import multiset_permutations
milli_time = lambda: int(round(monotonic() * 1000))
start_time = milli_time()
num_indices = 5
num_items = 300
indices = np.array([list(multiset_permutations(list(range(num_indices)))) for _ in range(num_items)])
print(indices)
[[[0 1 2 3 4]
[0 1 2 4 3]
[0 1 3 2 4]
...
[4 3 1 2 0]
[4 3 2 0 1]
[4 3 2 1 0]]
[[0 1 2 3 4]
[0 1 2 4 3]
[0 1 3 2 4]
...
[4 3 1 2 0]
[4 3 2 0 1]
[4 3 2 1 0]]
[[0 1 2 3 4]
[0 1 2 4 3]
[0 1 3 2 4]
...
[4 3 1 2 0]
[4 3 2 0 1]
[4 3 2 1 0]]
...
[[0 1 2 3 4]
[0 1 2 4 3]
[0 1 3 2 4]
...
[4 3 1 2 0]
[4 3 2 0 1]
[4 3 2 1 0]]
[[0 1 2 3 4]
[0 1 2 4 3]
[0 1 3 2 4]
...
[4 3 1 2 0]
[4 3 2 0 1]
[4 3 2 1 0]]
[[0 1 2 3 4]
[0 1 2 4 3]
[0 1 3 2 4]
...
[4 3 1 2 0]
[4 3 2 0 1]
[4 3 2 1 0]]]
print('Multiset Perms:', milli_time() - start_time, 'milliseconds')
Multiset Perms: 88 milliseconds
** Code Update to Reduce extra computations by 2/3 **
import itertools
import numpy as np
from time import monotonic
from sympy.utilities.iterables import multiset_permutations
milli_time = lambda: int(round(monotonic() * 1000))
start_time = milli_time()
num_colors = 5
color_range = list(range(num_colors))
total_media = 300
def all_perms(elements):
    if len(elements) <= 1:
        yield elements  # Only permutation possible = no permutation
    else:
        # Iteration over the first element in the result permutation:
        for (index, first_elmt) in enumerate(elements):
            other_elmts = elements[:index] + elements[index+1:]
            for permutation in all_perms(other_elmts):
                yield [first_elmt] + permutation
multiset = list(multiset_permutations(color_range))
# multiset = list(itertools.permutations(color_range))
# multiset = list(all_perms(color_range))
_range = range(total_media)
perm_indices = np.array([multiset for _ in _range])
print('Multiset Perms:', milli_time() - start_time, 'milliseconds')
Multiset Perms: 34 milliseconds

First of all, you do not need to recompute the permutations.
Moreover, np.array([multiset for _ in _range]) is expensive because Numpy has to convert multiset to an array total_media times. You can avoid that by using np.array([multiset]).repeat(total_media, axis=0).
Finally, sympy is not the fastest implementation for such a computation. A faster implementation uses itertools instead:
import itertools
import numpy as np

num_colors = 5
total_media = 300
color_range = list(range(num_colors))
multiset = list(set(itertools.permutations(color_range)))
perm_indices = np.array([multiset], dtype=np.int32).repeat(total_media, axis=0)
However, this itertools-based implementation does not preserve the lexicographic order of the permutations, since set ordering is arbitrary. If the order matters, sort the list of permutations (e.g. with sorted) before converting it to a Numpy array and applying repeat.
On my machine, this takes about 0.15 ms.
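For reference, a minimal sketch of the order-preserving variant (my addition, assuming lexicographic order is what is wanted); since the five colors are distinct, itertools.permutations already yields unique permutations:

import itertools
import numpy as np

num_colors = 5
total_media = 300
color_range = list(range(num_colors))

# permutations() yields distinct permutations in lexicographic order
# when the input is sorted and has no duplicate items.
multiset = sorted(set(itertools.permutations(color_range)))
perm_indices = np.array([multiset], dtype=np.int32).repeat(total_media, axis=0)

print(perm_indices.shape)  # (300, 120, 5)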

Related

Problem understanding how np.argpartition works

I have a problem with the execution of np.argpartition.
I have this ndarray:
example = np.array([[5,6,7,3,4],[1,2,3,7,5],[6,7,4,2,3],[1,2,3,5,9],[2,3,6,1,2]])
out: [[5 6 7 3 4]
[1 2 3 7 5]
[6 7 4 2 3]
[1 2 3 5 9]
[2 3 6 1 2]]
I can get indices for sorted array by np.argsort
print(np.argsort(example))
out:
[[3 4 0 1 2]
[0 1 2 4 3]
[3 4 2 0 1]
[0 1 2 3 4]
[3 0 4 1 2]]
I want to use np.argpartition instead to save some execution time, because I only need the first 3 sorted elements of each row. I use this code to do it:
print(np.argpartition(example, 3, axis=1))
out: [[3 4 0 1 2]
[1 0 2 4 3]
[3 4 2 0 1]
[1 0 2 3 4]
[3 4 0 1 2]]
I expect the first three indices of each row to match the indices in the sorted array, but this is not the case. I don't understand what I did wrong.
np.argpartition(example, k, axis=1) does not return indices that are sorted for the first k elements. It only returns indices such that the element at position k lands in its sorted place, with everything smaller or equal before it and everything larger or equal after it. If you look at your output, only the 4th element matches argsort().
If you want the first three elements sorted, you have to pass a list as the kth parameter:
index_array = np.argpartition(example, [0,1,2], axis=1)
print(np.take_along_axis(example, index_array, axis=1))  # this will give you the first 3 sorted elements
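To illustrate (my addition, just re-checking the above on your example array), the first three columns of the partition-sorted values agree with a full argsort, while the remaining columns stay unordered:

import numpy as np

example = np.array([[5,6,7,3,4],
                    [1,2,3,7,5],
                    [6,7,4,2,3],
                    [1,2,3,5,9],
                    [2,3,6,1,2]])

part = np.take_along_axis(example, np.argpartition(example, [0,1,2], axis=1), axis=1)
full = np.take_along_axis(example, np.argsort(example, axis=1), axis=1)

# Values in the first three columns match the fully sorted array.
print(np.array_equal(part[:, :3], full[:, :3]))  # True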

How to overwrite 2-D numpy multi times symmetrically with given index?

I'm trying to change values in a matrix a using the given index matrices d and e.
The matrix should stay symmetrical at all times.
What I came up with is to overwrite the original matrix at the given indices, try to make it symmetrical, then move on to the next overwrite, until all the given index matrices have been processed. It's not efficient.
And I'm stuck on how to make it symmetrical.
For example:
import numpy as np

a = np.ones([4,4], dtype=object)  # the original matrix
d = np.array([[1],
              [2],
              [0],
              [0]])  # the first index matrix
a[np.arange(a.shape[0])[:,None], d] = 2  # the elements change to 2 at the indexes given by the d matrix
Now the result is:
a = np.array([[1 2 1 1]
[1 1 2 1]
[2 1 1 1]
[2 1 1 1]])
After making it symmetrical (if a[i][j] was selected via the d matrix, a[j][i] should also be changed to 2; this is the part I don't know how to do), the expected output should be:
a = np.array([[1 2 2 2]
[2 1 2 1]
[2 2 1 1]
[2 1 1 1]])
Then, for another overwrite again:
e = np.array([[0],[2],[1],[1]])
a[np.arange(a.shape[0])[:,None],e] =3
Now the result is:
a = np.array([[3 2 2 2]
[2 1 3 1]
[2 3 1 1]
[2 3 1 1]])
After making it symmetrical (again, the part I don't know how to do), the final output should be (overwriting values that were previously set to 2 or 1):
a = np.array([[3 2 2 2]
[2 1 3 3]
[2 3 1 1]
[2 3 1 1]])
What should I do to get a symmetrical matrix? And is there any way to update the original matrix a directly to get the final result, in a more efficient way?
Thanks in advance !!
You can simply swap the first and second indices and apply the change again; the result will be symmetrical:
a[np.arange(a.shape[0])[:,None], d] = 2
a[d, np.arange(a.shape[0])[:,None]] = 2
output:
[[1 2 2 2]
[2 1 2 1]
[2 2 1 1]
[2 1 1 1]]
Same with any number of other changes:
a[np.arange(a.shape[0])[:,None], e] = 3
a[e, np.arange(a.shape[0])[:,None]] = 3
output:
[[3 2 2 2]
[2 1 3 3]
[2 3 1 1]
[2 3 1 1]]
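Put together, a minimal self-contained sketch of this approach (my addition, assuming the setup from the question):

import numpy as np

a = np.ones([4, 4], dtype=object)       # original matrix
rows = np.arange(a.shape[0])[:, None]   # column vector of row indices

d = np.array([[1], [2], [0], [0]])
a[rows, d] = 2   # apply the change...
a[d, rows] = 2   # ...and its transpose, keeping a symmetrical

e = np.array([[0], [2], [1], [1]])
a[rows, e] = 3
a[e, rows] = 3

print(a)   # matches the expected final output above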

Python: get all the possible combinations for allocating x apples to y baskets subject to constraint

Suppose we have x apples and y baskets, and we want to allocate all the apples to baskets such that each basket gets at most z apples. How can I write Python code that yields all possible allocations?
For a small y, I can just nest one loop per basket, as follows (x=5, y=3, z=2):
import numpy as np

all_chances = np.zeros((0,3))
for a in range(3):
    for b in range(3):
        for c in range(3):
            if a+b+c == 5:
                all_chances = np.vstack((all_chances, np.array([a,b,c])))
Basically, all_chances is
array([[1., 2., 2.],
[2., 1., 2.],
[2., 2., 1.]])
My question is: what if y is a large number, like x=30, y=26, z=2? Do I need to nest 26 loops?
I messed around with your question... tried implementing a sort of tree-based approach because I thought it'd be clever, but my laptop chokes on it. I was curious how many permutations we're looking at with these large numbers anyway, and changed the problem (for myself) to simply counting the permutations, to see whether it was even doable on a lightweight laptop.
I get 154,135,675,070 unique permutations.
To get started... I messed around with itertools, and permutations took forever with lists of length 26. So... to remind myself of the long-forgotten formula for counting distinct permutations, I found this: https://socratic.org/questions/how-many-distinct-permutations-can-be-made-from-the-letters-of-the-word-infinity
With that I ran the following to get a count. It runs in under a second.
from numpy import prod
from math import factorial
import itertools

# number of unique permutations (multinomial coefficient)
def count_distinct_permutations(tup):
    value_counts = [len(list(grp)) for _, grp in itertools.groupby(tup)]
    return factorial(sum(value_counts)) / prod([float(factorial(x)) for x in value_counts])

# starting values
x = 30  # apples
y = 26  # baskets
z = 3   # each basket holds 0..z-1 apples, i.e. at most 2
# count possible results
result = 0
for combos in itertools.combinations_with_replacement(range(z), y):
    if sum(combos) == x:
        result += count_distinct_permutations(combos)
Now... this obviously does NOT answer your specific question. Honestly, I couldn't hold the result you're looking for in memory anyway. But you can make some inferences from it... with your chosen values there are only 12 combinations of values, but between 15k and 50 million permutations of each combination.
You could look at each combination... in the count_distinct_permutations() function, itertools.groupby tells you how many of each number from (0,1,2) appears in the combination, and you could work with each of those twelve results to infer some things. Not sure what, but then I'm also not quite sure what to do with 154 billion lists of length 26. :)
Hope there was something useful here, even if it didn't answer your exact question. Good luck!
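As a quick sanity check (my addition, not part of the original answer), the same counting loop reproduces the small example from the question, x=5, y=3 and at most 2 apples per basket, which has exactly the 3 allocations listed above:

import itertools
from math import factorial
from numpy import prod

def count_distinct_permutations(tup):
    value_counts = [len(list(grp)) for _, grp in itertools.groupby(tup)]
    return factorial(sum(value_counts)) / prod([float(factorial(x)) for x in value_counts])

result = 0
for combos in itertools.combinations_with_replacement(range(3), 3):
    if sum(combos) == 5:
        result += count_distinct_permutations(combos)

print(result)  # 3.0 -- the rows [1,2,2], [2,1,2], [2,2,1] of all_chances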
Here is a method based on Young diagrams. For example, with 4 baskets, 6 eggs, and a maximum of 3 eggs per basket: if we order the baskets by how full they are, we get Young diagrams.
x x x   x x x   x x x   x x   x x
x x x   x x     x       x x   x x
        x       x       x x   x
                x             x
The code below enumerates all possible Young diagrams and, for each one, enumerates all possible permutations.
The same logic can also be used just to count.
from itertools import product, combinations
from functools import lru_cache
import numpy as np

def enum_ord_part(h, w, n, o=0):
    if h == 1:
        d = n
        for idx in combinations(range(w), d):
            idx = np.array(idx, int)
            out = np.full(w, o)
            out[idx] = o+1
            yield out
    else:
        for d in range((n-1)//h+1, min(w, n) + 1):
            for idx, higher in product(combinations(range(w), d),
                                       enum_ord_part(h-1, d, n-d, o+1)):
                idx = np.array(idx)
                out = np.full(w, o)
                out[idx] = higher
                yield out

def bc(n, k):
    if 2*k > n:
        k = n-k
    return np.prod(np.arange(n-k+1, n+1, dtype='O')) // np.prod(np.arange(1, k+1, dtype='O'))

@lru_cache(None)
def count_ord_part(h, w, n):
    if h == 1:
        return bc(w, n)
    else:
        return sum(bc(w, d) * count_ord_part(h-1, d, n-d)
                   for d in range((n-1)//h+1, min(w, n) + 1))
A few examples:
>>> for i, l in enumerate(enum_ord_part(3, 4, 6), 1):
...     print(l, end=' ' if i % 8 else '\n')
...
[3 3 0 0] [3 0 3 0] [3 0 0 3] [0 3 3 0] [0 3 0 3] [0 0 3 3] [3 2 1 0] [2 3 1 0]
[3 1 2 0] [2 1 3 0] [1 3 2 0] [1 2 3 0] [2 2 2 0] [3 2 0 1] [2 3 0 1] [3 1 0 2]
[2 1 0 3] [1 3 0 2] [1 2 0 3] [2 2 0 2] [3 0 2 1] [2 0 3 1] [3 0 1 2] [2 0 1 3]
[1 0 3 2] [1 0 2 3] [2 0 2 2] [0 3 2 1] [0 2 3 1] [0 3 1 2] [0 2 1 3] [0 1 3 2]
[0 1 2 3] [0 2 2 2] [3 1 1 1] [1 3 1 1] [1 1 3 1] [1 1 1 3] [2 2 1 1] [2 1 2 1]
[2 1 1 2] [1 2 2 1] [1 2 1 2] [1 1 2 2]
>>>
>>> print(f'{count_ord_part(2, 26, 30):,}')
154,135,675,070
>>> print(f'{count_ord_part(50, 30, 1000):,}')
63,731,848,167,716,295,344,627,252,024,129,873,636,437,590,711
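As a consistency check (my addition), counting the enumerated diagrams for the small example agrees with the closed-form count:

>>> sum(1 for _ in enum_ord_part(3, 4, 6))
44
>>> count_ord_part(3, 4, 6)
44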

numpy: extract multiple subarrays of a position array in an efficient way

I have a 2D coefficient array COEFF with size row x col and a position array POS with size n x 2.
The goal is to create a batched array BAT of size n x (2*l) x (2*l), where l is the half-length of the subarray. It looks like this:
BAT[i, :, :] = COEFF[POS[i, 1] - l:POS[i, 1] + l, POS[i, 0] - l:POS[i, 0] + l]
It is possible to generate BAT with the sequential code above. However, I'm wondering whether there is an efficient way to construct the BAT array in parallel.
Thanks!
I'm not aware of a perfectly satisfactory way of mixing advanced indexing and slicing like that. But the following may be acceptable (assuming that by "parallel" you mean "vectorised"):
import numpy as np

nrow, ncol = 7, 7
n, l = 3, 2
coeff = np.random.randint(0, 10, (nrow, ncol))
pos = np.c_[np.random.randint(l, nrow-l+1, (n,)), np.random.randint(l, ncol-l+1, (n,))]

# Broadcastable index arrays: i has shape (n, 2l, 1) and j has shape (n, 1, 2l),
# so coeff[i, j] broadcasts to the desired (n, 2l, 2l) batch.
i = (pos[:, :1] + np.arange(-l, l))[:, :, None]
j = (pos[:, 1:] + np.arange(-l, l))[:, None, :]
print(coeff, '\n')
print(pos, '\n')
print(coeff[i, j])
Prints:
# [[7 6 7 6 3 9 9]
# [3 6 8 3 4 8 6]
# [3 7 4 7 4 6 8]
# [0 7 2 3 7 0 4]
# [8 5 2 0 0 1 7]
# [4 6 1 9 4 5 4]
# [1 6 8 3 4 5 0]]
# [[2 2]
# [3 2]
# [2 4]]
# [[[7 6 7 6]
# [3 6 8 3]
# [3 7 4 7]
# [0 7 2 3]]
# [[3 6 8 3]
# [3 7 4 7]
# [0 7 2 3]
# [8 5 2 0]]
# [[7 6 3 9]
# [8 3 4 8]
# [4 7 4 6]
# [2 3 7 0]]]
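Alternatively (my addition), on NumPy 1.20+ the same batch can be gathered from a sliding-window view, reusing coeff, pos, l, i and j from above:

from numpy.lib.stride_tricks import sliding_window_view

# All (2l x 2l) windows of coeff as a view, with shape
# (nrow - 2l + 1, ncol - 2l + 1, 2l, 2l).
windows = sliding_window_view(coeff, (2*l, 2*l))

# The window centred at (r, c) starts at (r - l, c - l).
bat = windows[pos[:, 0] - l, pos[:, 1] - l]   # shape (n, 2l, 2l)

print(np.array_equal(bat, coeff[i, j]))  # True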

Take non-zero elements in a macro-list

I have a problem with the np.nonzero() instruction in Python. I want to take all the row indices of a given array that are non-zero in certain columns. Consider the following code:
import numpy as np
from scipy.special import binom

M = 4
N = 3

def generate(N, nb):
    states = np.zeros((int(binom(nb+N-1, nb)), N), dtype=int)
    states[0, 0] = nb
    ni = 0  # init
    for i in range(1, states.shape[0]):  # range, not Python 2's xrange
        states[i, :N-1] = states[i-1, :N-1]
        states[i, ni] -= 1
        states[i, ni+1] += 1 + states[i-1, N-1]
        if ni >= N-2:
            if np.any(states[i, :N-1]):
                ni = np.nonzero(states[i, :N-1])[0][-1]
        else:
            ni += 1
    return states
base = generate(M,N)
The result of base is given by:
base = [[3 0 0 0]
[2 1 0 0]
[2 0 1 0]
[2 0 0 1]
[1 2 0 0]
[1 1 1 0]
[1 1 0 1]
[1 0 2 0]
[1 0 1 1]
[1 0 0 2]
[0 3 0 0]
[0 2 1 0]
[0 2 0 1]
[0 1 2 0]
[0 1 1 1]
[0 1 0 2]
[0 0 3 0]
[0 0 2 1]
[0 0 1 2]
[0 0 0 3]]
The point is that, for given indices j, k, I want to take all the rows of base that have non-zero components at positions j and k. For example, taking j=0, k=1 I should obtain:
result = [1 4 5 6]
which corresponds to rows 1, 4, 5, 6 of base, the ones that satisfy this condition. I have used the command:
np.nonzero((base[:, j]) & (base[:, k]))[0]
but it doesn't work correctly. Any idea why?
The problem is the & operator: on integer arrays it is a bitwise AND, not a logical one, so it does not test "both non-zero". For example, in row 1 you have base[1, 0] = 2 and base[1, 1] = 1, and 2 & 1 == 0, so that row is missed.
You can use multiplication instead, which is non-zero exactly when both factors are non-zero:
j = 0; k = 1
np.nonzero(base[:, j] * base[:, k])[0]
which will give:
array([1, 4, 5, 6])
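Equivalently (my addition), a boolean mask sidesteps the bitwise pitfall, since & on boolean arrays is an element-wise logical AND:

import numpy as np

# assuming base as generated in the question
mask = (base[:, 0] != 0) & (base[:, 1] != 0)  # logical AND on booleans
print(np.nonzero(mask)[0])                    # [1 4 5 6]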
