How to sum elements of the rows of a lattice periodically - python

Suppose I have a lattice
a = np.array([[1, 1, 1, 1],
              [2, 2, 2, 2],
              [3, 3, 3, 3],
              [4, 4, 4, 4]])
I'd like to make a function func(lattice, start, end) that takes three inputs, where start and end are the indices of the rows whose elements the function should sum. For example, func(a,1,3) sums all the elements of rows 1 through 3, so that func(a,1,3) = 2+2+2+2+3+3+3+3+4+4+4+4 = 36.
Now I know this can be done easily with slicing and np.sum(). But crucially, I also want func to be able to wrap around. Namely, func(a,2,4) should return 3+3+3+3+4+4+4+4+1+1+1+1.
A couple more examples:
func(a,3,4) = 4+4+4+4+1+1+1+1
func(a,3,5) = 4+4+4+4+1+1+1+1+2+2+2+2
func(a,0,1) = 1+1+1+1+2+2+2+2
In my situation I will never get to the point where the sum wraps around the whole lattice again, i.e. I will never need something like
func(a,3,6) = sum of all elements
Update:
For my algorithm
for i in range(MC_STEPS_NODE):
    sweep(lattice, prob, start_index_master, end_index_master,
          rows_for_master)
    # calculate the energy
    Ene = subhamiltonian(lattice, start_index_master, end_index_master)
    # calculate the magnetisation
    Mag = mag(lattice, start_index_master, end_index_master)
    E1 += Ene
    M1 += Mag
    E2 += Ene*Ene
    M2 += Mag*Mag
    if i % sites_for_master == 0:
        comm.Send([lattice[start_index_master:start_index_master+1], L, MPI.INT],
                  dest=(rank-1)%size, tag=4)
        comm.Recv([lattice[end_index_master:end_index_master+1], L, MPI.INT],
                  source=(rank+1)%size, tag=4)
        start_index_master = (start_index_master + 1)
        end_index_master = (end_index_master + 1)
        if start_index_master > 100:
            start_index_master = start_index_master % L
        if end_index_master > 100:
            end_index_master = end_index_master % L
The function I want is the mag() function, which calculates the magnetisation of a sublattice, which is just the sum of all its elements. Imagine an LxL lattice split up into two sublattices, one belonging to the master and the other to the worker. Each sweep sweeps the corresponding sublattice of lattice, with start_index_master and end_index_master determining the start and end rows of the sublattice. Whenever i % sites_for_master == 0, the indices move down by 1, and are eventually reduced mod L (once they exceed 100) to prevent overflow in mpi4py. So you can imagine that if the sublattice is in the centre of the main lattice then start_index_master < end_index_master. Eventually the sublattice keeps moving down until its rows wrap past the bottom of the main lattice, i.e. the end row index goes beyond L; in that case, with L = 10, a row index of 10 refers to the first row ([0]) of the main lattice, so the bottom-most row of the sublattice is the first row of the main lattice.
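For concreteness, a minimal sketch of the behaviour mag() should have, assuming the wrap-around row sum described above (this is essentially the func asked for in this question, written with modular row indices as in the answers below):
def mag(lattice, start, end):
    # Magnetisation of the sublattice: the sum of all spins in rows start..end,
    # wrapping past the last row of the lattice back to row 0.
    rows = lattice.shape[0]
    return lattice[np.arange(start, end + 1) % rows].sum()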
Energy function:
def subhamiltonian(lattice: np.ndarray, col_len_start: int,
                   col_len_end: int) -> float:
    energy = 0
    for i in range(col_len_start, col_len_end+1):
        for j in range(len(lattice)):
            spin = lattice[i%L, j]
            nb_sum = lattice[(i%L+1) % L, j] + lattice[i%L, (j+1) % L] + \
                     lattice[(i%L-1) % L, j] + lattice[i%L, (j-1) % L]
            energy += -nb_sum*spin
    return energy/4.
This is my function for computing the energy of the sublattice.

You could use np.arange to create the indexes to be summed.
>>> def func(lattice, start, end):
...     rows = lattice.shape[0]
...     return lattice[np.arange(start, end+1) % rows].sum()
...
>>> func(a,3,4)
20
>>> func(a,3,5)
28
>>> func(a,0,1)
12

You can check whether the stop index wraps around and, if it does, add the sum from the beginning of the array to the result. This is efficient because it relies on slice indexing and only does extra work if necessary.
def func(a, start, stop):
    stop += 1
    result = np.sum(a[start:stop])
    if stop > len(a):
        result += np.sum(a[:stop % len(a)])
    return result
The above version works for stop - start < len(a), i.e. no more than one full wrap-around. For an arbitrary number of wrap-arounds (i.e. arbitrary values for start and stop) the following version can be used:
def multi_wraps(a, start, stop):
    result = 0
    # Adjust both indices in case the start index wrapped around.
    stop -= (start // len(a)) * len(a)
    start %= len(a)
    stop += 1  # Include the element pointed to by the stop index.
    n_wraps = (stop - start) // len(a)
    if n_wraps > 0:
        result += n_wraps * a.sum()
        stop = start + (stop - start) % len(a)
    result += np.sum(a[start:stop])
    if stop > len(a):
        result += np.sum(a[:stop % len(a)])
    return result
In case n_wraps > 0, some parts of the array are summed twice, which is unnecessarily inefficient, so we can instead compute the sums of the various array parts only as needed. The following version sums every array element at most once:
def multi_wraps_efficient(a, start, stop):
    # Adjust both indices in case the start index wrapped around.
    stop -= (start // len(a)) * len(a)
    start %= len(a)
    stop += 1  # Include the element pointed to by the stop index.
    n_wraps = (stop - start) // len(a)
    stop = start + (stop - start) % len(a)  # Eliminate the wraps since they will be accounted for separately.
    tail_sum = a[start:stop].sum()
    if stop > len(a):
        head_sum = a[:stop % len(a)].sum()
        if n_wraps > 0:
            remaining_sum = a[stop % len(a):start].sum()
    elif n_wraps > 0:
        head_sum = a[:start].sum()
        remaining_sum = a[stop:].sum()
    result = tail_sum
    if stop > len(a):
        result += head_sum
    if n_wraps > 0:
        result += n_wraps * (head_sum + tail_sum + remaining_sum)
    return result
The following plot shows a performance comparison between using index arrays and the two multi-wrap methods presented above. The tests are run on a (1_000, 1_000) lattice. One can observe that for the multi_wraps method there is an increase in runtime when going from 1 to 2 wrap-arounds, since it unnecessarily sums the array twice. The multi_wraps_efficient method has the same performance regardless of the number of wrap-arounds, since it sums every array element no more than once.
The performance plot was generated using the perfplot package:
perfplot.show(
    setup=lambda n: (np.ones(shape=(1_000, 1_000), dtype=int), 400, n*1_000 + 200),
    kernels=[
        lambda x: index_arrays(*x),
        lambda x: multi_wraps(*x),
        lambda x: multi_wraps_efficient(*x),
    ],
    labels=['index_arrays', 'multi_wraps', 'multi_wraps_efficient'],
    n_range=range(1, 11),
    xlabel="Number of wrap-around",
    equality_check=lambda x, y: x == y,
)
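Here index_arrays refers to the np.arange-based approach from the first answer, lifted into a standalone kernel; presumably something along these lines:
def index_arrays(lattice, start, stop):
    # Wrap-around row sum via an explicit modular index array.
    rows = lattice.shape[0]
    return lattice[np.arange(start, stop + 1) % rows].sum()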

Related

Empirically measuring time complexity of QuickSelect

I know that the time complexity should be O(N). However, when I'm testing it empirically, I get weird results. Can somebody please explain what's going on?
def insertPivot(array, start, end):
    pivot = end
    i = start
    j = end - 1
    while i < j:
        while array[i] < array[pivot] and i < j:
            i += 1
        while array[j] > array[pivot] and j > i:
            j -= 1
        array[i], array[j] = array[j], array[i]
    if array[i] > array[pivot]:
        array[i], array[pivot] = array[pivot], array[i]
        pivot = i
    return pivot
def quickselect(array, k):
    start = 0
    end = len(array) - 1
    pivot = insertPivot(array, start, end)
    while pivot != k - 1:
        if pivot < k - 1:
            start, end = pivot, end
        else:
            start, end = start, pivot - 1
        pivot = insertPivot(array, start, end)
    return array[k - 1]
And here's how I'm getting my measurements
import random
import timeit
import numpy as np

av_times = dict()
for n in [10, 100, 500, 1000, 5000, 10000]:
    times = list()
    array = list(range(n))
    for _ in range(10):
        random.shuffle(array)
        k = random.randint(0, n)
        times.append(
            timeit.timeit(lambda: quickselect(array, k), number=10)
        )
    av_times[n] = sum(times) / len(times)
xx, yy = zip(*av_times.items())
xx, yy = np.log(xx), np.log(yy)
m, b = np.polyfit(xx, yy, 1)
The slope coefficient m is 1.5, which suggests that the time complexity is O(N*sqrt(N))
insertPivot is indeed of O(N) complexity, since you increase i and decrease j until j is no longer greater than i. However, insertPivot is embedded in a while loop inside quickselect, so whatever the complexity of that loop is, it multiplies the complexity of insertPivot: an O(n) routine is executed on each step of the loop. If pivot < k - 1, you move the left boundary of your interval up to pivot; otherwise you move the right boundary down to pivot - 1. So on each step of the loop the interval shrinks by the difference between its left or right edge and pivot. Whatever function approximates the number of such steps is what you multiply the linear cost of insertPivot by, and that product is the actual complexity.
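To make that reasoning concrete, here is a rough instrumentation sketch (my own illustration, reusing the insertPivot above, with a hypothetical name count_work): it counts how many elements each insertPivot call scans and sums that over the outer loop. Plotting this count against n should show the same super-linear growth as the timings.
def count_work(array, k):
    # Total number of elements scanned across all insertPivot calls.
    work = 0
    start, end = 0, len(array) - 1
    pivot = insertPivot(array, start, end)
    work += end - start + 1
    while pivot != k - 1:
        if pivot < k - 1:
            start = pivot
        else:
            end = pivot - 1
        pivot = insertPivot(array, start, end)
        work += end - start + 1
    return work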

Incorrect indexing for max subarray in Python

I wrote both a brute-force and a divide-and-conquer implementation of the Max Subarray problem in Python. Tests are run by drawing a random sample of integers.
When the length of the input array is large, the assert in __main__ fails because the recursive algorithm does not return the correct answer. However, the two algorithms DO agree when the array is less than 10 elements long (this is approximate, and the actual size of the failed input varies on each execution). The issue does not seem to be related to even or odd array lengths, but it does appear to be related to how the array is indexed.
Sorry if I'm missing something stupid, but why does the recursive algorithm stop returning the correct output when the input array starts getting larger?
# Subarray solutions are represented by an array in the form
# [lower_bound, higher_bound, sum]
from sys import maxsize
import random
import time

# Brute force implementation (THETA(n^2))
def bf_max_subarray(A):
    biggest = -maxsize - 1
    left = 0
    right = 0
    for i in range(0, len(A)):
        sum = 0
        for j in range(i, len(A)):
            sum += A[j]
            if sum > biggest:
                biggest = sum
                left = i
                right = j
    return [left, right, biggest]

# Part of divide-and-conquer solution
def cross_subarray(A, l, m, r):
    lsum = -maxsize - 1
    rsum = -maxsize - 1
    lbound = 0
    rbound = 0
    tempsum = 0
    for i in range(m, l-1, -1):
        tempsum += A[i]
        if tempsum > lsum:
            lsum = tempsum
            lbound = i
    tempsum = 0
    for j in range(m+1, r+1):
        tempsum += A[j]
        if tempsum > rsum:
            rsum = tempsum
            rbound = j
    return [lbound, rbound, lsum + rsum]

# Recursive solution
def rec_max_subarray(A, l, r):
    # Base case: array of one element
    if (l == r):
        return [l, r, A[l]]
    else:
        m = (l+r)//2
        left = rec_max_subarray(A, l, m)
        right = rec_max_subarray(A, m+1, r)
        cross = cross_subarray(A, l, m, r)
        # Returns the array representing the subarray with the maximum sum.
        return max([left, right, cross], key=lambda i: i[2])

if __name__ == "__main__":
    for i in range(1, 101):
        A = random.sample(range(-i*2, i), i)
        start = time.clock()
        bf = bf_max_subarray(A)
        bf_time = time.clock() - start
        start = time.clock()
        dc = rec_max_subarray(A, 0, len(A)-1)
        dc_time = time.clock() - start
        assert dc == bf  # Make sure the algorithms agree.
The subarray with the maximum sum is represented by an array of the form [left_bound, right_bound, sum].
But thanks to return max([left, right, cross], key=lambda i:i[2]), rec_max_subarray returns the correct maximum sum for A, yet risks returning indices that do not match the indices returned by bf_max_subarray. My error was assuming that the boundaries of a subarray with the maximum sum would be unique.
The solution is to either fix the criteria that selects a subarray, or just to assert the equality of the sums using assert dc[2] == bf[2].
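A minimal way to apply the suggested fix inside the existing test loop:
if __name__ == "__main__":
    for i in range(1, 101):
        A = random.sample(range(-i*2, i), i)
        bf = bf_max_subarray(A)
        dc = rec_max_subarray(A, 0, len(A) - 1)
        # Compare only the sums; the bounds of a maximal subarray need not be unique.
        assert dc[2] == bf[2]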

Find subset with K elements that are closest to each other

Given an array of integers of size N, how can you efficiently find a subset of size K whose elements are closest to each other?
Let the closeness for a subset (x1, x2, x3, ..., xk) be defined as the sum of pairwise distances between its elements:
closeness = sum over all pairs i < j of |xi - xj|
Constraints:
2 <= N <= 10^5
2 <= K <= N
The array may contain duplicates and is not guaranteed to be sorted.
My brute force solution is very slow for large N, and it doesn't check if there's more than 1 solution:
import sys

N = input()
K = input()
assert 2 <= N <= 10**5
assert 2 <= K <= N
a = []
for i in xrange(0, N):
    a.append(input())
a.sort()
minimum = sys.maxint
startindex = 0
for i in xrange(0, N-K+1):
    last = i + K
    tmp = 0
    for j in xrange(i, last):
        for l in xrange(j+1, last):
            tmp += abs(a[j]-a[l])
            if tmp > minimum:
                break
    if tmp < minimum:
        minimum = tmp
        startindex = i  # end index = startindex + K?
Examples:
N = 7
K = 3
array = [10,100,300,200,1000,20,30]
result = [10,20,30]
N = 10
K = 4
array = [1,2,3,4,10,20,30,40,100,200]
result = [1,2,3,4]
Your current solution is O(NK^2) (assuming K > log N). With some analysis, I believe you can reduce this to O(NK).
The closest set of size K will consist of elements that are adjacent in the sorted list. You essentially have to first sort the array, so the subsequent analysis will assume that each sequence of K numbers is sorted, which allows the double sum to be simplified.
Assuming that the array is sorted such that x[j] >= x[i] when j > i, we can rewrite your closeness metric without the absolute value, as a double summation with simple bounds:
closeness = sum_{i=1..K-1} sum_{j=i+1..K} (x[j] - x[i])
Notice that we can rewrite the inner distance between x[i] and x[j] as a third summation over adjacent differences:
x[j] - x[i] = sum_{l=i..j-1} d[l]
where I've used d[l] = x[l+1] - x[l] to simplify the notation going forward.
Notice that d[l] is the distance between each adjacent element in the list. Look at the structure of the inner two summations for a fixed i:
j=i+1:  d[i]
j=i+2:  d[i] + d[i+1]
j=i+3:  d[i] + d[i+1] + d[i+2]
...
j=K (= i+(K-i)):  d[i] + d[i+1] + d[i+2] + ... + d[K-1]
Notice the triangular structure of the inner two summations. This allows us to rewrite the inner two summations as a single summation in terms of the distances of adjacent terms:
total: (K-i)*d[i] + (K-i-1)*d[i+1] + ... + 2*d[K-2] + 1*d[K-1]
which reduces the total sum to:
closeness = sum_{i=1..K-1} sum_{l=i..K-1} (K-l)*d[l]
Now we can look at the structure of this double summation:
i=1 (K-1)*d[1] + (K-2)*d[2] + (K-3)*d[3] + ... + 2*d[K-2] + d[K-1]
i=2 (K-2)*d[2] + (K-3)*d[3] + ... + 2*d[K-2] + d[K-1]
i=3 (K-3)*d[3] + ... + 2*d[K-2] + d[K-1]
...
i=K-2 2*d[K-2] + d[K-1]
i=K-1 d[K-1]
Again, notice the triangular pattern. The total sum then becomes:
1*(K-1)*d[1] + 2*(K-2)*d[2] + 3*(K-3)*d[3] + ... + (K-2)*2*d[K-2]
+ (K-1)*1*d[K-1]
Or, written as a single summation:
closeness = sum_{l=1..K-1} l*(K-l)*d[l]
This compact single summation of adjacent differences is the basis for a more efficient algorithm:
Sort the array, order O(N log N)
Compute the differences of each adjacent element, order O(N)
Iterate over each of the N-K+1 windows of K-1 consecutive differences and calculate the above sum, order O(NK)
Note that the second and third step could be combined, although with Python your mileage may vary.
The code:
def closeness(diff, K):
    acc = 0.0
    for (i, v) in enumerate(diff):
        acc += (i+1)*(K-(i+1))*v
    return acc

def closest(a, K):
    a.sort()
    N = len(a)
    diff = [a[i+1] - a[i] for i in xrange(N-1)]
    min_ind = 0
    min_val = closeness(diff[0:K-1], K)
    for ind in xrange(1, N-K+1):
        cl = closeness(diff[ind:ind+K-1], K)
        if cl < min_val:
            min_ind = ind
            min_val = cl
    return a[min_ind:min_ind+K]
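For example, with the two arrays from the question (my own usage check), this should behave as expected:
closest([10, 100, 300, 200, 1000, 20, 30], 3)        # -> [10, 20, 30]
closest([1, 2, 3, 4, 10, 20, 30, 40, 100, 200], 4)   # -> [1, 2, 3, 4]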
itertools to the rescue?
from itertools import combinations

def closest_elements(iterable, K):
    N = set(iterable)
    assert(2 <= K <= len(N) <= 10**5)
    combs = lambda it, k: combinations(it, k)
    _abs = lambda it: abs(it[0] - it[1])
    d = {}
    v = 0
    for x in combs(N, K):
        for y in combs(x, 2):
            v += _abs(y)
        d[x] = v
        v = 0
    return min(d, key=d.get)
>>> a = [10,100,300,200,1000,20,30]
>>> b = [1,2,3,4,10,20,30,40,100,200]
>>> print closest_elements(a, 3); closest_elements(b, 4)
(10, 20, 30) (1, 2, 3, 4)
This procedure can be done with O(N*K) if A is sorted. If A is not sorted, then the time will be bounded by the sorting procedure.
This is based on 2 facts (relevant only when A is ordered):
The closest subsets always consist of consecutive elements of the sorted array.
When calculating the closeness of K consecutive elements, the sum of distances can be calculated as the sum of the distances between each two consecutive elements, weighted by (K-i)*i where i is 1,...,K-1.
When iterating through the sorted array, it is redundant to recompute the entire sum; instead we can remove K times the distance between the previous two smallest elements and add K times the distance between the two new largest elements. This fact is used to calculate the closeness of a subset in O(1) from the closeness of the previous subset.
Here's the pseudo-code
List<pair> FindClosestSubsets(int[] A, int K)
{
    List<pair> minList = new List<pair>;
    int minVal = infinity;
    int tempSum;
    int N = A.length;
    for (int i = K - 1; i < N; i++)
    {
        // Window is A[i-K+1 .. i]; sum the weighted adjacent differences.
        tempSum = 0;
        for (int j = i - K + 2; j <= i; j++)
        {
            int p = j - (i - K + 1);   // position of the pair inside the window, 1..K-1
            tempSum += p * (K - p) * (A[j] - A[j-1]);
        }
        if (tempSum < minVal)
        {
            minVal = tempSum;
            minList.clear();
            minList.add(new pair(i - K + 1, i));
        }
        else if (tempSum == minVal)
            minList.add(new pair(i - K + 1, i));
    }
    return minList;
}
This function returns a list of pairs of indices representing the optimal solutions (the starting and ending index of each solution); it was implied in the question that you want all solutions achieving the minimal value.
Try the following:
N = input()
K = input()
assert 2 <= N <= 10**5
assert 2 <= K <= N
a = some_unsorted_list
a.sort()

cur_diff = sum([abs(a[i] - a[i + 1]) for i in range(K - 1)])
min_diff = cur_diff
min_last_idx = K - 1
for last_idx in range(K, N):
    cur_diff = cur_diff - \
               abs(a[last_idx - K] - a[last_idx - K + 1]) + \
               abs(a[last_idx] - a[last_idx - 1])
    if min_diff > cur_diff:
        min_diff = cur_diff
        min_last_idx = last_idx
From min_last_idx you can calculate min_first_idx. I use range to preserve the order of the indices. If this is Python 2.7, it will take linearly more RAM. This is the same algorithm that you use, but slightly more efficient (a smaller constant in the complexity), as it does less than summing everything.
After sorting, we can be sure that, if x1, x2, ..., xk are the solution, then they are contiguous elements, right?
So:
take the intervals between numbers
sum these intervals to get the intervals between k numbers
choose the smallest of them (a sketch follows below)
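A minimal Python sketch of those three steps as literally stated (my own illustration, not the answerer's code): after sorting, the interval spanned by K consecutive numbers starting at position i is a[i+K-1] - a[i], which is the sum of the K-1 adjacent intervals inside that window, and we pick the window where it is smallest.
a = sorted([10, 100, 300, 200, 1000, 20, 30])
K = 3
# Interval spanned by each window of K consecutive sorted values.
spans = [a[i + K - 1] - a[i] for i in range(len(a) - K + 1)]
best = min(range(len(spans)), key=spans.__getitem__)
print(a[best:best + K])  # -> [10, 20, 30]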
My initial solution was to look through every K-element window, multiply each element by m, and take the sum over that range, where m is initialized to -(K-1) and incremented by 2 at each step; then take the minimum sum over the entire list. So for a window of size 3, m is -2 and the values for the range are -2, 0, 2. This is because I observed that each element in the K-window adds a certain weight to the sum. For example, if the elements are [10 20 30] the sum is (30-10) + (30-20) + (20-10), and if we break down the expression we have 2*30 + 0*20 + (-2)*10. Each window can be evaluated in O(K) time, so the entire operation is O(NK). However, it turns out that this solution is not optimal as written, and there are certain edge cases where this code fails. I have yet to figure out those cases, but I am sharing the solution anyway in case anyone can figure out something useful from it.
for (i = 0; i <= n - k; ++i)
{
    diff = 0;
    l = -(k-1);
    for (j = i; j < i + k; ++j)
    {
        diff += a[j]*l;
        if (min < diff)
            break;
        l += 2;
    }
    if (j == i + k && diff > 0)
        min = diff;
}
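For reference, a hedged Python sketch of the weighting idea described above (the function and variable names are my own): an element at 0-based position p inside a sorted window of size K contributes with weight 2*p - (K - 1), so the pairwise-distance sum of the window is a plain weighted sum.
def window_closeness(a_sorted, start, K):
    # Weighted sum equal to the sum of pairwise distances inside the window.
    return sum((2 * p - (K - 1)) * a_sorted[start + p] for p in range(K))

a = sorted([10, 100, 300, 200, 1000, 20, 30])
K = 3
best = min(range(len(a) - K + 1), key=lambda s: window_closeness(a, s, K))
print(a[best:best + K])  # -> [10, 20, 30]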
You can do this in O(n log n) time with a sliding window approach (O(n) if the array is already sorted).
First, suppose we've precomputed, at every index i in our array, the sum of distances from A[i] to the previous k-1 elements. The formula for that would be
(A[i] - A[i-1]) + (A[i] - A[i-2]) + ... + (A[i] - A[i-k+1]).
If i is less than k-1, we just compute the sum to the array boundary.
Suppose we also precompute, at every index i in our array, the sum of distances from A[i] to the next k-1 elements. Then we could solve the whole problem with a single pass of a sliding window.
If our sliding window is on [L, L+k-1] with closeness sum S, then the closeness sum for the interval [L+1, L+k] is just S - dist_sum_to_next[L] + dist_sum_to_prev[L+k]. The only changes in the sum of pairwise distances are removing all terms involving A[L] when it leaves our window, and adding all terms involving A[L+k] as it enters our window.
The only remaining part is how to compute, at a position i, the sum of distances between A[i] and the previous k-1 elements (the other computation is totally symmetric). If we know the distance sum at i-1, this is easy: subtract the distance from A[i-1] to A[i-k], and add in the extra distance from A[i-1] to A[i] k-1 times
dist_sum_to_prev[i] = dist_sum_to_prev[i - 1] - (A[i - 1] - A[i - k]) + (A[i] - A[i - 1]) * (k - 1)
Python code:
import math
from typing import List

def closest_subset(nums: List[int], k: int) -> List[int]:
    """Given a list of n (poss. unsorted and non-unique) integers nums,
    returns a (sorted) list of size k that minimizes the sum of pairwise
    distances between all elements in the list.

    Runs in O(n lg n) time, uses O(n) auxiliary space.
    """
    n = len(nums)
    assert 2 <= k <= n
    nums.sort()

    # Sum of pairwise distances to the next (at most) k-1 elements
    dist_sum_to_next = [0] * n
    # Sum of pairwise distances to the last (at most) k-1 elements
    dist_sum_to_prev = [0] * n

    for i in range(1, n):
        if i >= k:
            dist_sum_to_prev[i] = ((dist_sum_to_prev[i - 1] -
                                    (nums[i - 1] - nums[i - k]))
                                   + (nums[i] - nums[i - 1]) * (k - 1))
        else:
            dist_sum_to_prev[i] = (dist_sum_to_prev[i - 1]
                                   + (nums[i] - nums[i - 1]) * i)

    for i in reversed(range(n - 1)):
        if i < n - k:
            dist_sum_to_next[i] = ((dist_sum_to_next[i + 1]
                                    - (nums[i + k] - nums[i + 1]))
                                   + (nums[i + 1] - nums[i]) * (k - 1))
        else:
            dist_sum_to_next[i] = (dist_sum_to_next[i + 1]
                                   + (nums[i + 1] - nums[i]) * (n - i - 1))

    best_sum = math.inf
    curr_sum = 0
    answer_right_bound = 0

    for i in range(n):
        curr_sum += dist_sum_to_prev[i]
        if i >= k:
            curr_sum -= dist_sum_to_next[i - k]
        if curr_sum < best_sum and i >= k - 1:
            best_sum = curr_sum
            answer_right_bound = i

    return nums[answer_right_bound - k + 1:answer_right_bound + 1]

Finding a minimal subarray of n integers of sum >= k in linear time

Recently I've been struggling with the following problem:
Given an array of integers, find a minimal (shortest length) subarray that sums to at least k.
Obviously this can easily be done in O(n^2). I was able to write an algorithm that solves it in linear time for natural numbers, but I can't figure it out for integers.
My latest attempt was this:
def find_minimal_length_subarr_z(arr, min_sum):
    found = False
    start = end = cur_end = cur_sum = 0
    for cur_start in range(len(arr)):
        if cur_end <= cur_start:
            cur_end, cur_sum = cur_start, arr[cur_start]
        else:
            cur_sum -= arr[cur_start-1]
        # Expand
        while cur_sum < min_sum and cur_end < len(arr)-1:
            cur_end += 1
            cur_sum += arr[cur_end]
        # Contract
        while cur_end > cur_start:
            new_sum = cur_sum - arr[cur_end]
            if new_sum >= min_sum or new_sum >= cur_sum:
                cur_end -= 1
                cur_sum = new_sum
            else:
                break
        if cur_sum >= min_sum and (not found or cur_end-cur_start < end-start):
            start, end, found = cur_start, cur_end, True
    if found:
        return start, end
For example:
[8, -7, 5, 5, 4], 12 => (2, 4)
However, it fails for:
[-12, 2, 2, -12, 2, 0], 4
where the correct result is (1, 2) but the algorithm doesn't find it.
Can this at all be done in linear time (with preferably constant space complexity)?
Here's one that's linear time but also linear space. The extra space comes from a deque that could grow to linear size. (There's also a second array to maintain cumulative sums, but that could be removed pretty easily.)
from collections import deque

def find_minimal_length_subarr(arr, k):
    # assume k is positive
    sumBefore = [0]
    for x in arr:
        sumBefore.append(sumBefore[-1] + x)
    bestStart = -1
    bestEnd = len(arr)
    startPoints = deque()
    start = 0
    for end in range(len(arr)):
        totalToEnd = sumBefore[end+1]
        while startPoints and totalToEnd - sumBefore[startPoints[0]] >= k:  # adjust start
            start = startPoints.popleft()
        if totalToEnd - sumBefore[start] >= k and end-start < bestEnd-bestStart:
            bestStart, bestEnd = start, end
        while startPoints and totalToEnd <= sumBefore[startPoints[-1]]:  # remove bad candidates
            startPoints.pop()
        startPoints.append(end+1)  # end+1 is a new candidate
    return (bestStart, bestEnd)
The deque holds a sequence of candidate start positions from left to right. The key invariant is that positions in the deque are also sorted by increasing value of "sumBefore".
To see why, consider two positions x and y with x > y, and suppose sumBefore[x] <= sumBefore[y]. Then x is a strictly better start position than y (for segments ending at x or later), so we need never consider y again.
FURTHER EXPLANATION:
Imagine a naive algorithm that looked like this:
for end in 0..N-1
    for start in 0..end
        check the segment from start to end
I'm trying to improve the inner loop to only consider certain start points instead of all possible start points. So when can we eliminate a particular start point from further consideration? In two situations. Consider two start points S0 and S1 with S0 to the left of S1.
First, we can eliminate S0 if we ever find that S1 begins an eligible segment (that is, a segment summing to at least k). That's what the first while loop does, where start is S0 and startPoints[0] is S1. Even if we found some future eligible segment starting at S0, it would be longer than the segment we already found starting at S1.
Second, we can eliminate S0 if the sum of the elements from S0 to S1-1 is <= 0 (or, equivalently if the sum of the elements before S0 >= the sum of the elements before S1). This is what the second while loop does, where S0 is startPoints[-1] and S1 is end+1. Trimming off the elements from S0 to S1-1 always makes sense (for end points at S1 or later), because it makes the segment shorter without decreasing its sum.
Actually, there's a third situation where we could eliminate S0: when the distance from S0 to end is greater than the length of the shortest segment found so far. I didn't implement this case because it wasn't needed.
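A quick usage check of find_minimal_length_subarr on the two arrays from the question (my own addition):
print(find_minimal_length_subarr([8, -7, 5, 5, 4], 12))       # -> (2, 4)
print(find_minimal_length_subarr([-12, 2, 2, -12, 2, 0], 4))  # -> (1, 2)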
Here is pseudo-code delivering the solution you are looking for.
curIndex = 0
while (curIndex <= endIndex)
{
    if (curSum == 0)
    {
        startIndex = curIndex
    }
    curSum = curSum + curVal
    curTot = curTot + 1
    if (curSum >= targetVal AND curTot < minTotSofar)
    {
        maxSumSofar = curSum
        maxStartIndex = startIndex
        maxEndIndex = curIndex
        minTotSofar = curTot
        if (curTot == 1)
        {
            exit_loop
        }
        curSum = 0
        curTot = 0
        curIndex = startIndex
    }
    else if (curIndex == endIndex)
    {
        if (maxSumSofar == 0 AND curSum >= targetValue)
        {
            maxSumSofar = curSum
            maxStartIndex = startIndex
            maxEndIndex = curIndex
            minTotSofar = curTot
        }
        else if (curSum < targetValue AND startIndex < endIndex)
        {
            curSum = 0
            curTot = 0
            curIndex = startIndex
        }
    }
    curIndex = curIndex + 1
}
------------ UPDATE AFTER JWPAT7 SUGGESTION
INPUTS: array of integers, indexed from 0 to endIndex. Target value (k) to compare with (targetVal).
OUTPUTS: final addition of the chosen subset (maxSumSoFar), start index of the subset (maxStartIndex), end index of the subset (maxEndIndex), total number of elements in the subset (minTotSofar).

Why is this python code taking so long?

Alright, I have this Python code that compares merge sort and selection sort, but it is taking forever. When done from n = 0 to 90,000 (the size of the list), it only takes about 3 seconds to sort the list. By this logic, it would take about 10 * 3 * 9 seconds (number of run-throughs * duration * number of increments [we start with 10,000, then do 20,000, then 30,000, etc.]). However, it takes far longer than that.
import time
import random

# Selection Sort Code #
def maxIndex(J):
    return J.index(max(J))

def swap(LCopy, i, j):
    temp = LCopy[i]
    LCopy[i] = LCopy[j]
    LCopy[j] = temp

# Implementation of selection sort
def selectionSort(L):
    for i in range(len(L)-1, 1, -1):
        j = maxIndex(L[0:i+1])
        swap(L, i, j)

# Merge Sort Code #
# Assumes that L[first:mid+1] is sorted and also
# that L[mid: last+1] is sorted. Returns L with L[first: last+1] sorted
def merge(L, first, mid, last):
    i = first    # index into the first half
    j = mid + 1  # index into the second half
    tempList = []
    # This loops goes on as long as BOTH i and j stay within their
    # respective sorted blocks
    while (i <= mid) and (j <= last):
        if L[i] <= L[j]:
            tempList.append(L[i])
            #print L[i], "from the first block"
            i += 1
        else:
            tempList.append(L[j])
            #print L[j], "from the second block"
            j += 1
    # If i goes beyond the first block, there may be some elements
    # in the second block that need to be copied into tempList.
    # Similarly, if j goes beyond the second block, there may be some
    # elements in the first block that need to be copied into tempList
    if i == mid + 1:
        tempList.extend(L[j:last+1])
        #print L[j:last+1], "some elements in second block are left over"
    elif j == last+1:
        tempList.extend(L[i:mid+1])
        #print L[i:mid+1], "some elements from first block are left over"
    L[first:last+1] = tempList
    #print tempList

# The merge sort function; sorts the sublist L[first:last+1]
def generalMergeSort(L, first, last):
    # Base case: if first == last then it is already sorted
    # Recursive case: L[first:last+1] has size 2 or more
    if first < last:
        # divide step
        mid = (first + last)/2
        # conquer step
        generalMergeSort(L, first, mid)
        generalMergeSort(L, mid+1, last)
        # combine step
        merge(L, first, mid, last)

# Wrapper function
def mergeSort(L):
    generalMergeSort(L, 0, len(L)-1)

m = 10
n = 100000
n_increments = 9
baseList = [random.randint(0, 100) for r in range(n)]
i = 0
while i < n_increments:
    j = 0
    sel_time = 0
    mer_time = 0
    while j < m:
        # Do a Selection Sort #
        x = time.clock()
        selectionSort(baseList)
        y = time.clock()
        sel_time += (y - x)
        random.shuffle(baseList)
        # Do a Merge Sort #
        x = time.clock()
        mergeSort(baseList)
        y = time.clock()
        mer_time += (y - x)
        random.shuffle(baseList)
        j += 1
    print "average select sort time for a list of", n, "size:", sel_time / m
    print "average merge sort time for a list of", n, "size:", mer_time / m
    j = 0
    i += 1
    n += 10000
Because you are using O(n^2) sorting algorithms. This means that if you double n, the algorithm takes four times longer to run. Note also that you are starting at n = 100,000, not 10,000.
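A rough way to see the quadratic growth described here (my own illustration; timings will vary by machine): doubling the input size should roughly quadruple selectionSort's runtime.
import random
import time

for n in (2000, 4000, 8000):
    data = [random.randint(0, 100) for _ in range(n)]
    t0 = time.time()
    selectionSort(data)         # the selection sort defined in the question
    print(n, time.time() - t0)  # expect roughly a 4x jump each time n doubles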
