Given an array of N integers, how can you efficiently find a subset of size K whose elements are closest to each other?
Let the closeness for a subset (x1,x2,x3,..xk) be defined as the sum of distances over all pairs:
closeness = sum over all pairs i < j of |xi - xj|
Constraints:
2 <= N <= 10^5
2 <= K <= N
The array may contain duplicates and is not guaranteed to be sorted.
My brute force solution is very slow for large N, and it doesn't check if there's more than 1 solution:
import sys

N = input()
K = input()
assert 2 <= N <= 10**5
assert 2 <= K <= N

a = []
for i in xrange(0, N):
    a.append(input())
a.sort()

minimum = sys.maxint
startindex = 0
for i in xrange(0, N-K+1):
    last = i + K
    tmp = 0
    for j in xrange(i, last):
        for l in xrange(j+1, last):
            tmp += abs(a[j]-a[l])
        if(tmp > minimum):
            break
    if(tmp < minimum):
        minimum = tmp
        startindex = i   # end index = startindex + K?
Examples:
N = 7
K = 3
array = [10,100,300,200,1000,20,30]
result = [10,20,30]
N = 10
K = 4
array = [1,2,3,4,10,20,30,40,100,200]
result = [1,2,3,4]
Your current solution is O(NK^2) (assuming K > log N). With some analysis, I believe you can reduce this to O(NK).
The closest set of size K will consist of elements that are adjacent in the sorted list. You essentially have to first sort the array, so the subsequent analysis will assume that each sequence of K numbers is sorted, which allows the double sum to be simplified.
Assuming the array is sorted so that x[j] >= x[i] when j > i, we can rewrite your closeness metric to eliminate the absolute value: each term |x[i] - x[j]| with i < j becomes simply x[j] - x[i].
Next we rewrite your notation into a double summation with simple bounds:
closeness = sum_{i=1..K-1} sum_{j=i+1..K} (x[j] - x[i])
Notice that we can rewrite the inner distance between x[i] and x[j] as a third summation:
x[j] - x[i] = sum_{l=i..j-1} (x[l+1] - x[l]) = sum_{l=i..j-1} d[l]
where I've used d[l] = x[l+1] - x[l] to simplify the notation going forward, so the closeness becomes the triple sum:
closeness = sum_{i=1..K-1} sum_{j=i+1..K} sum_{l=i..j-1} d[l]
Notice that d[l] is the distance between each adjacent pair of elements in the sorted list. Look at the structure of the inner two summations for a fixed i:
j=i+1 d[i]
j=i+2 d[i] + d[i+1]
j=i+3 d[i] + d[i+1] + d[i+2]
...
j=K=i+(K-i) d[i] + d[i+1] + d[i+2] + ... + d[K-1]
Notice the triangular structure of the inner two summations. This allows us to rewrite the inner two summations as a single summation in terms of the distances of adjacent terms:
total: (K-i)*d[i] + (K-i-1)*d[i+1] + ... + 2*d[K-2] + 1*d[K-1]
which reduces the total sum to a double summation over the adjacent differences:
closeness = sum_{i=1..K-1} sum_{l=i..K-1} (K-l)*d[l]
Now we can look at the structure of this double summation:
i=1 (K-1)*d[1] + (K-2)*d[2] + (K-3)*d[3] + ... + 2*d[K-2] + d[K-1]
i=2 (K-2)*d[2] + (K-3)*d[3] + ... + 2*d[K-2] + d[K-1]
i=3 (K-3)*d[3] + ... + 2*d[K-2] + d[K-1]
...
i=K-2 2*d[K-2] + d[K-1]
i=K-1 d[K-1]
Again, notice the triangular pattern. The total sum then becomes:
1*(K-1)*d[1] + 2*(K-2)*d[2] + 3*(K-3)*d[3] + ... + (K-2)*2*d[K-2]
+ (K-1)*1*d[K-1]
Or, written as a single summation:
closeness = sum_{l=1..K-1} l*(K-l)*d[l]
This compact single summation of adjacent differences is the basis for a more efficient algorithm:
Sort the array, order O(N log N)
Compute the differences of each adjacent element, order O(N)
Iterate over each of the N-K+1 windows of K-1 consecutive differences and calculate the above sum, order O(NK)
Note that the second and third steps could be combined, although with Python your mileage may vary.
The code:
def closeness(diff, K):
    acc = 0.0
    for (i, v) in enumerate(diff):
        acc += (i+1)*(K-(i+1))*v
    return acc

def closest(a, K):
    a.sort()
    N = len(a)
    diff = [a[i+1] - a[i] for i in xrange(N-1)]
    min_ind = 0
    min_val = closeness(diff[0:K-1], K)
    for ind in xrange(1, N-K+1):
        cl = closeness(diff[ind:ind+K-1], K)
        if cl < min_val:
            min_ind = ind
            min_val = cl
    return a[min_ind:min_ind+K]
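As a quick check, here is a minimal Python 3 sketch (helper names are mine) that verifies the single-summation identity against the brute-force pairwise sum on the question's first example, and recovers the expected window:
from itertools import combinations

def pairwise_sum(window):
    # Brute-force closeness: sum of |x - y| over all pairs in the window
    return sum(abs(x - y) for x, y in combinations(window, 2))

def weighted_diff_sum(window):
    # Same value via the single summation sum_{l} l*(K-l)*d[l]
    K = len(window)
    d = [window[i + 1] - window[i] for i in range(K - 1)]
    return sum((l + 1) * (K - (l + 1)) * d[l] for l in range(K - 1))

a = sorted([10, 100, 300, 200, 1000, 20, 30])
K = 3
for start in range(len(a) - K + 1):
    w = a[start:start + K]
    assert pairwise_sum(w) == weighted_diff_sum(w)

best = min(range(len(a) - K + 1), key=lambda s: weighted_diff_sum(a[s:s + K]))
print(a[best:best + K])   # [10, 20, 30]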
itertools to the rescue?
from itertools import combinations

def closest_elements(iterable, K):
    N = set(iterable)
    assert(2 <= K <= len(N) <= 10**5)
    combs = lambda it, k: combinations(it, k)
    _abs = lambda it: abs(it[0] - it[1])
    d = {}
    v = 0
    for x in combs(N, K):
        for y in combs(x, 2):
            v += _abs(y)
        d[x] = v
        v = 0
    return min(d, key=d.get)
>>> a = [10,100,300,200,1000,20,30]
>>> b = [1,2,3,4,10,20,30,40,100,200]
>>> print closest_elements(a, 3); closest_elements(b, 4)
(10, 20, 30) (1, 2, 3, 4)
This procedure can be done in O(N*K) if A is sorted. If A is not sorted, then the time will be dominated by the sorting procedure.
This is based on 2 facts (relevant only when A is sorted):
The closest subsets will always consist of consecutive elements.
When calculating the closeness of K consecutive elements, the sum of distances can be calculated as the sum of the distances between each two consecutive elements times (K-i)*i, where i is 1,...,K-1.
When iterating through the sorted array, it is redundant to recompute the entire sum: we can instead remove K times the distance between the previous two smallest elements and add K times the distance between the two new largest elements. This fact is used to calculate the closeness of a subset in O(1) from the closeness of the previous subset.
Here's the pseudo-code
List<pair> FindClosestSubsets(int[] A, int K)
{
    List<pair> minList = new List<pair>();
    int minVal = infinity;
    int tempSum;
    int N = A.length;
    for (int i = K - 1; i < N; i++)
    {
        tempSum = 0;
        for (int j = i - K + 2; j <= i; j++)
        {
            int w = j - (i - K + 1);                 // w = 1,...,K-1
            tempSum += w * (K - w) * (A[j] - A[j-1]);
        }
        if (tempSum < minVal)
        {
            minVal = tempSum;
            minList.clear();
            minList.add(new pair(i - K + 1, i));
        }
        else if (tempSum == minVal)
            minList.add(new pair(i - K + 1, i));
    }
    return minList;
}
This function will return a list of pairs of indexes representing the optimal solutions (the starting and ending index of each solution), it was implied in the question that you want to return all solutions of the minimal value.
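Here is a rough Python translation of the pseudo-code above (function and variable names are my own); it returns every (start, end) index pair that attains the minimum:
def find_closest_subsets(A, K):
    # Assumes A is already sorted; returns every (start, end) index pair
    # whose K-element window minimizes the weighted sum of adjacent differences.
    min_list = []
    min_val = float('inf')
    for end in range(K - 1, len(A)):
        start = end - K + 1
        temp = sum(w * (K - w) * (A[start + w] - A[start + w - 1])
                   for w in range(1, K))
        if temp < min_val:
            min_val = temp
            min_list = [(start, end)]
        elif temp == min_val:
            min_list.append((start, end))
    return min_list

print(find_closest_subsets(sorted([10, 100, 300, 200, 1000, 20, 30]), 3))
# [(0, 2)]  -> the elements [10, 20, 30]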
try the following:
N = input()
K = input()
assert 2 <= N <= 10**5
assert 2 <= K <= N
a = some_unsorted_list
a.sort()

cur_diff = sum([abs(a[i] - a[i + 1]) for i in range(K - 1)])
min_diff = cur_diff
min_last_idx = K - 1
for last_idx in range(K, N):
    cur_diff = cur_diff \
        - abs(a[last_idx - K + 1] - a[last_idx - K]) \
        + abs(a[last_idx] - a[last_idx - 1])
    if min_diff > cur_diff:
        min_diff = cur_diff
        min_last_idx = last_idx
From min_last_idx, you can calculate min_first_idx. I use range to preserve the order of the indices; if this is Python 2.7, it will take linearly more RAM. This is the same algorithm that you use, but slightly more efficient (a smaller constant in the complexity), as it does less work than re-summing the whole window each time.
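For example, recovering the actual window from min_last_idx could look like this (reusing the variables from the snippet above):
min_first_idx = min_last_idx - K + 1      # the window is K consecutive sorted elements
result = a[min_first_idx:min_last_idx + 1]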
After sorting, we can be sure that if x1, x2, ..., xk are the solution, then they are contiguous elements, right?
So:
take the intervals between adjacent numbers
sum these intervals to get the interval spanned by k numbers
choose the smallest of them (a minimal sketch follows)
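A minimal sketch of that idea (my own wording): it picks the window of k consecutive sorted elements with the smallest total span, i.e. the smallest sum of its adjacent intervals.
def smallest_span_window(a, k):
    # The sum of adjacent intervals inside a sorted window equals its span,
    # so we simply look for the window with the smallest span a[i+k-1] - a[i].
    a = sorted(a)
    start = min(range(len(a) - k + 1), key=lambda i: a[i + k - 1] - a[i])
    return a[start:start + k]

print(smallest_span_window([10, 100, 300, 200, 1000, 20, 30], 3))   # [10, 20, 30]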
My initial solution was to look at each window of K elements, multiply each element by a weight m, and take the sum over that range, where m starts at -(K-1) and is incremented by 2 at each step; the minimum such sum over the whole list is the answer. So for a window of size 3, m takes the values -2, 0, 2. This comes from the observation that each element in the K-window contributes a fixed weight to the sum: for example, if the elements are [10, 20, 30] the sum is (30-10) + (30-20) + (20-10), which breaks down as 2*30 + 0*20 + (-2)*10. Each window can be handled in O(K) time, so the entire operation is O(NK). However, it turns out that this solution is not optimal, and there are certain edge cases where the algorithm fails. I have yet to figure out those cases, but I am sharing the solution anyway in case anyone can make something useful of it.
/* assumes a[] is sorted in ascending order and min is initialised to a large value */
for(i = 0; i <= n - k; ++i)
{
    diff = 0;
    l = -(k-1);
    for(j = i; j < i + k; ++j)
    {
        diff += a[j]*l;
        if(min < diff)
            break;
        l += 2;
    }
    if(j == i + k && diff < min)
        min = diff;
}
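The weighting observation itself checks out on the worked example; here is a quick Python check (mine, for illustration only):
window = [10, 20, 30]
K = len(window)
pairwise = sum(abs(x - y) for i, x in enumerate(window) for y in window[i + 1:])
weighted = sum((2 * p - (K - 1)) * v for p, v in enumerate(window))
print(pairwise, weighted)   # 40 40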
You can do this is O(n log n) time with a sliding window approach (O(n) if the array is already sorted).
First, suppose we've precomputed, at every index i in our array, the sum of distances from A[i] to the previous k-1 elements. The formula for that would be
(A[i] - A[i-1]) + (A[i] - A[i-2]) + ... + (A[i] - A[i-k+1]).
If i is less than k-1, we just compute the sum to the array boundary.
Suppose we also precompute, at every index i in our array, the sum of distances from A[i] to the next k-1 elements. Then we could solve the whole problem with a single pass of a sliding window.
If our sliding window is on [L, L+k-1] with closeness sum S, then the closeness sum for the interval [L+1, L+k] is just S - dist_sum_to_next[L] + dist_sum_to_prev[L+k]. The only changes in the sum of pairwise distances are removing all terms involving A[L] when it leaves our window, and adding all terms involving A[L+k] as it enters our window.
The only remaining part is how to compute, at a position i, the sum of distances between A[i] and the previous k-1 elements (the other computation is totally symmetric). If we know the distance sum at i-1, this is easy: subtract the distance from A[i-1] to A[i-k], and add in the extra distance from A[i-1] to A[i] k-1 times
dist_sum_to_prev[i] = dist_sum_to_prev[i - 1] - (A[i - 1] - A[i - k])
                      + (A[i] - A[i - 1]) * (k - 1)
Python code:
import math
from typing import List

def closest_subset(nums: List[int], k: int) -> List[int]:
    """Given a list of n (poss. unsorted and non-unique) integers nums,
    returns a (sorted) list of size k that minimizes the sum of pairwise
    distances between all elements in the list.

    Runs in O(n lg n) time, uses O(n) auxiliary space.
    """
    n = len(nums)
    assert 2 <= k <= n
    nums.sort()

    # Sum of pairwise distances to the next (at most) k-1 elements
    dist_sum_to_next = [0] * n
    # Sum of pairwise distances to the last (at most) k-1 elements
    dist_sum_to_prev = [0] * n

    for i in range(1, n):
        if i >= k:
            dist_sum_to_prev[i] = ((dist_sum_to_prev[i - 1] -
                                    (nums[i - 1] - nums[i - k]))
                                   + (nums[i] - nums[i - 1]) * (k - 1))
        else:
            dist_sum_to_prev[i] = (dist_sum_to_prev[i - 1]
                                   + (nums[i] - nums[i - 1]) * i)

    for i in reversed(range(n - 1)):
        if i < n - k:
            dist_sum_to_next[i] = ((dist_sum_to_next[i + 1]
                                    - (nums[i + k] - nums[i + 1]))
                                   + (nums[i + 1] - nums[i]) * (k - 1))
        else:
            dist_sum_to_next[i] = (dist_sum_to_next[i + 1]
                                   + (nums[i + 1] - nums[i]) * (n - i - 1))

    best_sum = math.inf
    curr_sum = 0
    answer_right_bound = 0

    for i in range(n):
        curr_sum += dist_sum_to_prev[i]
        if i >= k:
            curr_sum -= dist_sum_to_next[i - k]
        if curr_sum < best_sum and i >= k - 1:
            best_sum = curr_sum
            answer_right_bound = i

    return nums[answer_right_bound - k + 1:answer_right_bound + 1]
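Running it on the examples from the question gives the expected subsets:
print(closest_subset([10, 100, 300, 200, 1000, 20, 30], 3))       # [10, 20, 30]
print(closest_subset([1, 2, 3, 4, 10, 20, 30, 40, 100, 200], 4))  # [1, 2, 3, 4]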
Related
A common algorithm for solving the problem of finding the median of two sorted arrays of size m and n is to:
Run binary search to adjust "a cut" of the smaller array in two halves. When doing so, we adjust the cut of the larger array to make sure the total number of elements on the first halves of both arrays equals the total number of elements in the second halves of both arrays, which is a pre-condition for splitting both arrays around the median.
The binary search shifts the cuts left or right until all elements on the left halves <= all elements on the right halves.
At the end of the procedure, we can readily compute the median with a basic comparison of the elements on the boundary of the cuts of both arrays.
While I understand at a high level the algorithm, I'm not sure I understand why one needs to do the calculation on the smaller array, and adjust the larger array, as opposed to the other way around.
Here's a video explaining the algorithm, but the author doesn't explain exactly why we use the smaller array to drive the binary search.
I'm also including below Python code that is supposed to solve the problem, mostly to make the post self-contained, even if it's not well documented.
def median(A, B):
    m, n = len(A), len(B)
    if m > n:
        ## Making sure that A refers to the smaller array
        A, B, m, n = B, A, n, m
    if n == 0:
        raise ValueError

    imin, imax, half_len = 0, m, (m + n + 1) / 2
    while imin <= imax:
        i = (imin + imax) / 2
        j = half_len - i
        if i < m and B[j-1] > A[i]:
            # i is too small, must increase it
            imin = i + 1
        elif i > 0 and A[i-1] > B[j]:
            # i is too big, must decrease it
            imax = i - 1
        else:
            # i is perfect
            if i == 0: max_of_left = B[j-1]
            elif j == 0: max_of_left = A[i-1]
            else: max_of_left = max(A[i-1], B[j-1])

            if (m + n) % 2 == 1:
                return max_of_left

            if i == m: min_of_right = B[j]
            elif j == n: min_of_right = A[i]
            else: min_of_right = min(A[i], B[j])

            return (max_of_left + min_of_right) / 2.0
By enforcing m <= n, we make sure both i and j are always non-negative.
Also, we are able to reduce some redundant boundary checks in the while loop when working with i and j.
Take the first if condition in the while loop as an example: the code checks i < m before accessing A[i], but why doesn't it also check j-1 >= 0 before accessing B[j-1]? This is because i falls into [0, m] and j = (m + n + 1)/2 - i, so when i is at its largest, j is at its smallest.
When i < m, j = (m + n + 1)/2 - i > (m + n + 1)/2 - m = (n - m + 1)/2 > 0, since m <= n. So j must be positive when i < m, and hence j - 1 >= 0.
Similarly, in the second if condition in the while loop, when i > 0, j is guaranteed to be less than n.
To verify this idea, you can try removing the size check and swap logic at the top and running through the example input below, in which A is longer than B.
[1,2,3,4,6]
[5]
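With the swap in place, the same input is handled fine. A quick check of my own (under Python 2, which this code assumes given the integer division):
print(median([1, 2, 3, 4, 6], [5]))   # 3.5 -- the combined array is [1, 2, 3, 4, 5, 6]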
Suppose I have a lattice
a = np.array([[1, 1, 1, 1],
              [2, 2, 2, 2],
              [3, 3, 3, 3],
              [4, 4, 4, 4]])
I'd like to make a function func(lattice, start, end) that takes in 3 inputs, where start and end are the indices of the rows over which the function sums the elements. For example, func(a,1,3) will sum all the elements of those rows, so that func(a,1,3) = 2+2+2+2+3+3+3+3+4+4+4+4 = 36.
Now I know this can be done easily with slicing and np.sum() whatever. But crucially what I want func to do is to also have the ability to wrap around. Namely func(a,2,4) should return 3+3+3+3+4+4+4+4+1+1+1+1.
Couple more examples would be
func(a,3,4) = 4+4+4+4+1+1+1+1
func(a,3,5) = 4+4+4+4+1+1+1+1+2+2+2+2
func(a,0,1) = 1+1+1+1+2+2+2+2
In my situation I'm never gonna get to a point where it'll sum the whole thing again i.e.
func(a,3,6) = sum of all elements
Update:
For my algorithm
for i in range(MC_STEPS_NODE):
    sweep(lattice, prob, start_index_master, end_index_master,
          rows_for_master)
    # calculate the energy
    Ene = subhamiltonian(lattice, start_index_master, end_index_master)
    # calculate the magnetisation
    Mag = mag(lattice, start_index_master, end_index_master)
    E1 += Ene
    M1 += Mag
    E2 += Ene*Ene
    M2 += Mag*Mag
    if i % sites_for_master == 0:
        comm.Send([lattice[start_index_master:start_index_master+1], L, MPI.INT],
                  dest=(rank-1) % size, tag=4)
        comm.Recv([lattice[end_index_master:end_index_master+1], L, MPI.INT],
                  source=(rank+1) % size, tag=4)
        start_index_master = (start_index_master + 1)
        end_index_master = (end_index_master + 1)
        if start_index_master > 100:
            start_index_master = start_index_master % L
        if end_index_master > 100:
            end_index_master = end_index_master % L
The function I want is the mag() function, which calculates the magnetisation of a sublattice, i.e. just the sum of all its elements. Imagine an LxL lattice split up into two sublattices, one belonging to the master and the other to the worker. Each sweep sweeps the corresponding sublattice of lattice, with start_index_master and end_index_master determining the start and end row of the sublattice. For every i % sites_for_master == 0, the indices move down by adding 1, and are eventually taken mod 100 (L) to prevent memory overflow in mpi4py. So you can imagine that if the sublattice is at the centre of the main lattice, then start_index_master < end_index_master. Eventually the sublattice keeps moving down to the point where end_index_master > L, so in this case, if start_index_master = 10 for a lattice with L = 10, the bottom row of the sublattice is the first row ([0]) of the main lattice.
Energy function:
def subhamiltonian(lattice: np.ndarray, col_len_start: int,
                   col_len_end: int) -> float:
    energy = 0
    for i in range(col_len_start, col_len_end+1):
        for j in range(len(lattice)):
            spin = lattice[i%L, j]
            nb_sum = lattice[(i%L+1) % L, j] + lattice[i%L, (j+1) % L] + \
                     lattice[(i%L-1) % L, j] + lattice[i%L, (j-1) % L]
            energy += -nb_sum*spin
    return energy/4.
This is my function for computing the energy of the sublattice.
You could use np.arange to create the indexes to be summed.
>>> def func(lattice, start, end):
...     rows = lattice.shape[0]
...     return lattice[np.arange(start, end+1) % rows].sum()
...
>>> func(a,3,4)
20
>>> func(a,3,5)
28
>>> func(a,0,1)
12
You can check if the stop index wraps-around and if it does add the sum from the beginning of the array to the result. This is efficient because it relies on slice indexing and only does extra work if necessary.
def func(a, start, stop):
    stop += 1
    result = np.sum(a[start:stop])
    if stop > len(a):
        result += np.sum(a[:stop % len(a)])
    return result
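A quick check against the examples in the question, using the 4x4 array a defined there:
print(func(a, 3, 4), func(a, 3, 5), func(a, 0, 1))   # 20 28 12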
The above version works for stop - start < len(a), i.e. no more than one full wrap-around. For an arbitrary number of wrap-around (i.e. arbitrary values for start and stop) the following version can be used:
def multi_wraps(a, start, stop):
    result = 0
    # Adjust both indices in case the start index wrapped around.
    stop -= (start // len(a)) * len(a)
    start %= len(a)
    stop += 1  # Include the element pointed to by the stop index.
    n_wraps = (stop - start) // len(a)
    if n_wraps > 0:
        result += n_wraps * a.sum()
        stop = start + (stop - start) % len(a)
    result += np.sum(a[start:stop])
    if stop > len(a):
        result += np.sum(a[:stop % len(a)])
    return result
In case n_wraps > 0 some parts of the array will be summed twice which is unnecessarily inefficient, so we can compute the sum of the various array parts as necessary. The following version sums every array element at most once:
def multi_wraps_efficient(a, start, stop):
    # Adjust both indices in case the start index wrapped around.
    stop -= (start // len(a)) * len(a)
    start %= len(a)
    stop += 1  # Include the element pointed to by the stop index.
    n_wraps = (stop - start) // len(a)
    stop = start + (stop - start) % len(a)  # Eliminate the wraps since they will be accounted for separately.
    tail_sum = a[start:stop].sum()
    if stop > len(a):
        head_sum = a[:stop % len(a)].sum()
        if n_wraps > 0:
            remaining_sum = a[stop % len(a):start].sum()
    elif n_wraps > 0:
        head_sum = a[:start].sum()
        remaining_sum = a[stop:].sum()
    result = tail_sum
    if stop > len(a):
        result += head_sum
    if n_wraps > 0:
        result += n_wraps * (head_sum + tail_sum + remaining_sum)
    return result
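As a small consistency check (my own) on the 4x4 array a from the question, covering a single wrap-around, two full wrap-arounds, and a wrapped start index:
for start, stop in [(3, 5), (0, 7), (5, 14)]:
    print(multi_wraps(a, start, stop), multi_wraps_efficient(a, start, stop))
# 28 28
# 80 80
# 100 100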
The following plot shows a performance comparison between using index arrays and the two multi-wrap methods presented above. The tests are run on a (1_000, 1_000) lattice. One can observe that for the multi_wraps method there is an increase in runtime when going from 1 to 2 wrap-arounds, since it unnecessarily sums the array twice. The multi_wraps_efficient method has the same performance regardless of the number of wrap-arounds, since it sums every array element no more than once.
The performance plot was generated using the perfplot package:
perfplot.show(
    setup=lambda n: (np.ones(shape=(1_000, 1_000), dtype=int), 400, n*1_000 + 200),
    kernels=[
        lambda x: index_arrays(*x),
        lambda x: multi_wraps(*x),
        lambda x: multi_wraps_efficient(*x),
    ],
    labels=['index_arrays', 'multi_wraps', 'multi_wraps_efficient'],
    n_range=range(1, 11),
    xlabel="Number of wrap-around",
    equality_check=lambda x, y: x == y,
)
I have a model for the four possibilities of purchasing a pair of items (purchasing both, neither, or just one) and I need to optimize the (pseudo-) log-likelihood function. Part of this, of course, is the calculation/definition of the pseudo-log-likelihood function.
The following is my code, where Beta is a 2-d vector for each customer (there are U customers and U different beta vectors), X is a 2-d vector for each item (different for each of the N items) and Gamma is a symmetric matrix with a scalar value gamma(i,j) for each pair of items. And df is a dataframe of the purchases - one row for each customer and N columns for the items.
It would seem to me that all of these loops are inefficient and take up too much time, but I am not sure how to speed up this calculation and would appreciate any help improving it.
Thank you in advance!
def pseudo_likelihood(Args):
    Beta = np.reshape(Args[0:2*U], (U, 2))
    Gamma = np.reshape(Args[2*U:], (N, N))
    L = 0
    for u in range(0, U, 1):
        print datetime.datetime.today(), " for user {}".format(u)
        y = df.loc[u][1:]
        beta_u = Beta[u,:]
        for l in range(N):
            print datetime.datetime.today(), " for item {}".format(l)
            for i in range(N-1):
                if i == l:
                    continue
                for j in range(i+1, N):
                    if (y[i] == y[j]):
                        if (y[i] == 1):
                            L += np.dot(beta_u, (x_vals.iloc[i,1:]+x_vals.iloc[j,1:])) + Gamma[i,j]  #Log of the exponent of this expression
                        else:
                            L += np.log(
                                1 - np.exp(np.dot(beta_u, (x_vals.iloc[i, 1:] + x_vals.iloc[j, 1:])) + Gamma[i, j])
                                - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:])) * (
                                    1 - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])))
                                - np.exp(np.dot(beta_u, x_vals.iloc[j, 1:])) * (
                                    1 - np.exp(np.dot(beta_u, x_vals.iloc[i, 1:]))))
                    else:
                        if (y[i] == 1):
                            L += np.dot(beta_u, x_vals.iloc[i,1:]) + np.log(1 - np.exp(np.dot(beta_u, x_vals.iloc[j,1:])))
                        else:
                            L += (np.dot(beta_u, x_vals.iloc[j,1:])) + np.log(1 - np.exp(np.dot(beta_u, x_vals.iloc[i,1:])))
            L -= (N-2)*np.dot(beta_u, x_vals.iloc[l,1:])
            for k in range(N):
                if k != l:
                    L -= np.dot(beta_u, x_vals.iloc[k,1:])
    return -L
To add/clarify - I am using this calculation to optimize and find the beta and gamma parameters that generated the data for this pseudo-likelihood function.
I am using scipy optimize.minimize with the 'Powell' method.
Update, for whoever is interested:
I found numpy.einsum to speed up the calculations here by over 90%.
np.einsum performs matrix/vector operations using Einstein notation. Recall that for two matrices A and B, the ik-th element of the product AB is the sum over j of a_ij * b_jk:
(AB)_ik = sum over j of a_ij * b_jk
Using the einsum function I could calculate in advance all of the values necessary for the iterative calculation, saving precious time and hundreds, if not thousands, of unnecessary calculations.
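For instance, the beta_dot_x precomputation below is just a matrix-vector product written in Einstein notation; a tiny standalone illustration (the array values are made up):
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # one row of features per item
beta_u = np.array([0.5, -1.0])

# 'ij,j' sums over the shared index j, i.e. a plain matrix-vector product
print(np.einsum('ij,j', X, beta_u))   # same as X @ beta_u -> [-1.5 -2.5 -3.5]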
I rewrote the code as follows:
def pseudo_likelihood(Args):
    Beta = np.reshape(Args[0:2*U], (U, 2))
    Gamma = np.reshape(Args[2*U:], (N, N))
    exp_gamma = np.exp(Gamma)
    L = 0
    for u in xrange(U):
        y = df.loc[u][1:]
        beta_u = Beta[u,:]
        beta_dot_x = np.einsum('ij,j', x_vals[['V1','V2']], beta_u)
        exp_beta_dot_x = np.exp(beta_dot_x)
        log_one_minus_exp = np.log(1 - exp_beta_dot_x)
        for l in xrange(N):
            for i in xrange(N-1):
                if i == l:
                    continue
                for j in xrange(i+1, N):
                    if (y[i] == y[j]):
                        if (y[i] == 1):
                            L += beta_dot_x[i] + beta_dot_x[j] + Gamma[i,j]  #Log of the exponent of this expression
                        else:
                            L += math.log(
                                1 - exp_beta_dot_x[i]*exp_beta_dot_x[j]*exp_gamma[i,j]
                                - exp_beta_dot_x[i] * (1 - exp_beta_dot_x[j])
                                - exp_beta_dot_x[j] * (1 - exp_beta_dot_x[i]))
                    else:
                        if (y[i] == 1):
                            L += beta_dot_x[i] + log_one_minus_exp[j]
                        else:
                            L += (beta_dot_x[j]) + log_one_minus_exp[i]
            L -= (N-2)*beta_dot_x[l]
            for k in xrange(N):
                if k != l:
                    L -= beta_dot_x[k]
    return -L
I want to compute the mean of the matrix TBM[x,y], where x and y are respectively the rows and columns, but not over all rows. The goal is to obtain TBM[s,y], where s is not 1 but can be s < x. The reason for not using all the rows is a condition on time: if t0 < t < t1 we compute the first mean, then for t1 < t < t2 we compute the second mean, and so on.
So I could compute this with two nested for loops and an if condition on the time t, and put the means in a new matrix. But how?
EDIT:
for t in time:
    for i in range(kk, len(TBM)-1):
        if (TBM[i,1] > t[j] and TBM[i,1] < t[j+1]):
            sums = sums + TBM[i,2]
            kk = kk + 1
    means[t] = sums / kk
But it seems that is not working as I wanted.
I found a solution:
count = []
k = 0
for j in range(len(t)-1):
    for i in range(k, len(TBM)-1):
        if (TBM[i] > t[j] and TBM[i] < t[j+1]):
            count.append(i)
            k = k + 1
            break
count.append(len(TBM)-1)
count = np.asarray(count)

data_mean = np.asarray([[0.0 for j in range(len(count)-1)] for i in range(np.size(TBM[0]))]).T
data_sum = np.asarray([[0.0 for j in range(len(count)-1)] for i in range(np.size(TBM[0]))]).T

for k in range(len(count)-1):
    for j in range(np.size(TBM[0])):
        for i in range(count[k], count[k+1]):
            data_sum[k,j] = data_sum[k,j] + data_tbm1[i,j]
        data_mean[k,j] = data_sum[k,j] / (count[k+1]-count[k])
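A more compact way to get the same kind of time-binned means is np.digitize plus a boolean mask. This is a sketch of my own, assuming a small made-up TBM with time in column 0 and values in column 1, and bin edges in t:
import numpy as np

TBM = np.array([[0.5, 10.0], [1.5, 20.0], [1.7, 30.0], [2.5, 40.0]])   # made-up data
t = np.array([0.0, 1.0, 2.0, 3.0])                                     # bin edges

bins = np.digitize(TBM[:, 0], t)                  # which time bin each row falls into
means = np.array([TBM[bins == b, 1].mean() for b in range(1, len(t))])
print(means)                                       # [10. 25. 40.]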
Imagine you're trying to allocate some fixed resources (e.g. n=10) over some number of territories (e.g. t=5). I am trying to find out efficiently how to get all the combinations where the sum is n or below.
E.g. 10,0,0,0,0 is good, as well as 0,0,5,5,0 etc., while 3,3,3,3,3 (sum 15) is obviously wrong.
I got this far:
import itertools
t = 5
n = 10
r = [range(n+1)] * t
for x in itertools.product(*r):
    if sum(x) <= n:
        print x
This brute force approach is incredibly slow though; there must be a better way?
Timings (1000 iterations):
Default (itertools.product) --- time: 40.90 s
falsetru recursion --- time: 3.63 s
Aaron Williams Algorithm (impl, Tony) --- time: 0.37 s
Possible approach follows. Definitely would use with caution (hardly tested at all, but the results on n=10 and t=5 look reasonable).
The approach involves no recursion. The algorithm to generate partitions of a number n (10 in your example) having m elements (5 in your example) comes from Knuth's 4th volume. Each partition is then zero-extended if necessary, and all the distinct permutations are generated using an algorithm from Aaron Williams which I have seen referred to elsewhere. Both algorithms had to be translated to Python, and that increases the chance that errors have crept in. The Williams algorithm wanted a linked list, which I had to fake with a 2D array to avoid writing a linked-list class.
There goes an afternoon!
Code (note your n is my maxn and your t is my p):
import itertools

def visit(a, m):
    """ Utility function to add partition to the list"""
    x.append(a[1:m+1])

def parts(a, n, m):
    """ Knuth Algorithm H, Combinatorial Algorithms, Pre-Fascicle 3B
        Finds all partitions of n having exactly m elements.
        An upper bound on running time is (3 x number of
        partitions found) + m. Not recursive!
    """
    while (1):
        visit(a, m)
        while a[2] < a[1]-1:
            a[1] -= 1
            a[2] += 1
            visit(a, m)
        j = 3
        s = a[1]+a[2]-1
        while a[j] >= a[1]-1:
            s += a[j]
            j += 1
        if j > m:
            break
        x = a[j] + 1
        a[j] = x
        j -= 1
        while j > 1:
            a[j] = x
            s -= x
            j -= 1
        a[1] = s

def distinct_perms(partition):
    """ Aaron Williams Algorithm 1, "Loopless Generation of Multiset
        Permutations by Prefix Shifts". Finds all distinct permutations
        of a list with repeated items. I don't follow the paper all that
        well, but it _possibly_ has a running time which is proportional
        to the number of permutations (with 3 shift operations for each
        permutation on average). Not recursive!
    """
    perms = []
    val = 0
    nxt = 1
    l1 = [[partition[i], i+1] for i in range(len(partition))]
    l1[-1][nxt] = None
    #print(l1)
    head = 0
    i = len(l1)-2
    afteri = i+1
    tmp = []
    tmp += [l1[head][val]]
    c = head
    while l1[c][nxt] != None:
        tmp += [l1[l1[c][nxt]][val]]
        c = l1[c][nxt]
    perms.extend([tmp])
    while (l1[afteri][nxt] != None) or (l1[afteri][val] < l1[head][val]):
        if (l1[afteri][nxt] != None) and (l1[i][val] >= l1[l1[afteri][nxt]][val]):
            beforek = afteri
        else:
            beforek = i
        k = l1[beforek][nxt]
        l1[beforek][nxt] = l1[k][nxt]
        l1[k][nxt] = head
        if l1[k][val] < l1[head][val]:
            i = k
        afteri = l1[i][nxt]
        head = k
        tmp = []
        tmp += [l1[head][val]]
        c = head
        while l1[c][nxt] != None:
            tmp += [l1[l1[c][nxt]][val]]
            c = l1[c][nxt]
        perms.extend([tmp])
    return perms

maxn = 10 # max integer to find partitions of
p = 5 # max number of items in each partition

# Find all partitions of length p or less adding up
# to maxn or less

# Special cases (Knuth's algorithm requires n and m >= 2)
x = [[i] for i in range(maxn+1)]

# Main cases: runs parts fn (maxn^2+maxn)/2 times
for i in range(2, maxn+1):
    for j in range(2, min(p+1, i+1)):
        m = j
        n = i
        a = [0, n-m+1] + [1] * (m-1) + [-1] + [0] * (n-m-1)
        parts(a, n, m)

y = []
# For each partition, add zeros if necessary and then find
# distinct permutations. Runs distinct_perms function once
# for each partition.
for part in x:
    if len(part) < p:
        y += distinct_perms(part + [0] * (p - len(part)))
    else:
        y += distinct_perms(part)

print(y)
print(len(y))
Make your own recursive function which does not recurse with an element unless it is still possible to make a sum <= 10.
def f(r, n, t, acc=[]):
    if t == 0:
        if n >= 0:
            yield acc
        return
    for x in r:
        if x > n:  # <---- do not recurse if sum is larger than `n`
            break
        for lst in f(r, n-x, t-1, acc + [x]):
            yield lst

t = 5
n = 10
for xs in f(range(n+1), n, 5):
    print xs
You can create all the permutations with itertools, and parse the results with numpy.
>>> import numpy as np
>>> from itertools import product
>>> t = 5
>>> n = 10
>>> r = range(n+1)
# Create the product numpy array
>>> prod = np.fromiter(product(r, repeat=t), np.dtype('u1,' * t))
>>> prod = prod.view('u1').reshape(-1, t)
# Extract only permutations that satisfy a condition
>>> prod[prod.sum(axis=1) <= n]
Timeit:
>>> %%timeit
prod = np.fromiter(product(r, repeat=t), np.dtype('u1,' * t))
prod = prod.view('u1').reshape(-1, t)
prod[prod.sum(axis=1) <= n]
10 loops, best of 3: 41.6 ms per loop
You could even speed up the product computation by populating combinations directly in numpy.
You could optimize the algorithm using Dynamic Programming.
Basically, have an array a, where a[i][j] means "can I get a sum of j with the elements up to the i-th element (and using the i-th element)", assuming you have your elements in an array t (not the number t you mentioned).
Then you can fill the array doing
a[0][t[0]] = True
for i in range(1, len(t)):
    a[i][t[i]] = True
    for j in range(t[i]+1, n+1):
        for k in range(0, i):
            if a[k][j-t[i]]:
                a[i][j] = True
Then, using this info, you could backtrack the solution :)
def backtrack(j = len(t)-1, goal = n):
    print j, goal
    all_solutions = []
    if j == -1:
        return []
    if goal == t[j]:
        all_solutions.append([j])
    for i in range(j-1, -1, -1):
        if a[i][goal-t[j]]:
            r = backtrack(i, goal - t[j])
            for l in r:
                print l
                l.append(j)
                all_solutions.append(l)
    all_solutions.extend(backtrack(j-1, goal))
    return all_solutions

backtrack() # is the answer