The Problem:
You are given an array m of size n, where each value of m is composed of a weight w and a percentage p.
m = [m0, m1, m2, ..., m(n-1)] = [[m0w, m0p], [m1w, m1p], [m2w, m2p], ..., [m(n-1)w, m(n-1)p]]
So we'll represent this in python as a list of lists.
We are then trying to find the minimum value of this function:
def minimize_me(m):
    t = 0
    w = 1
    for i in range(len(m)):
        current = m[i]
        t += w * current[0]
        w *= current[1]
    return t
where the only thing we can change about m is its ordering (i.e. we may rearrange the elements of m in any way). Additionally, this needs to complete in better than O(n!) time.
Brute Force Solution:
import itertools

min_t = float("inf")
min_permutation = None
for permutation in itertools.permutations(m):
    t = minimize_me(list(permutation))
    if t < min_t:
        min_t = t
        min_permutation = list(permutation)
Ideas On How To Optimize:
the idea:
Instead of finding the best order directly, see if we can find a way to compare two given values in m when we know the state of the problem (the code might explain this more clearly). If I can build this using a bottom-up approach (starting from the end, assuming I have no optimal solution yet) and can create an equation that compares two values in m and says one is definitively better than the other, then I can construct an optimal solution by taking that value and comparing the remaining values of m.
the code:
import itertools

def compare_m(a, b, v):
    a_first = b[0] + b[1] * (a[0] + a[1] * v)
    b_first = a[0] + a[1] * (b[0] + b[1] * v)
    if a_first > b_first:
        return a, a_first
    else:
        return b, b_first

best_ordering = []
v = 0
while len(m) > 1:
    best_pair_t = float("inf")
    best_m = None
    for pair in itertools.combinations(m, 2):
        candidate, pair_t = compare_m(pair[0], pair[1], v)
        if pair_t < best_pair_t:
            best_pair_t = pair_t
            best_m = candidate
    best_ordering.append(best_m)
    m.remove(best_m)
    v = best_m[0] + best_m[1] * v

first = m[0]
best_ordering.append(first)
However, this is not working as intended. The first value is always right, and roughly 60-75% of the time, the entire solution is optimal. However, in some cases, it looks like the way I am changing the value v which then gets passed back into my compare is evaluating much higher than it should. Here's the script I'm using to test against:
import random

m = []
for i in range(0, 5):
    w = random.randint(1, 1023)
    p = random.uniform(0.01, 0.99)
    m.append([w, p])
Here's a particular test case demonstrating the error:
m = [[493, 0.7181996086105675], [971, 0.19915848527349228], [736, 0.5184210526315789], [591, 0.5904761904761905], [467, 0.6161290322580645]]
optimal solution (just the indices) = [1, 4, 3, 2, 0]
my solution (just the indices) = [4, 3, 1, 2, 0]
It feels very close, but I cannot for the life of me figure out what is wrong. Am I looking at this the wrong way? Does this seem like it's on the right track? Any help or feedback would be greatly appreciated!
We don't need any information about the current state of the algorithm to decide which elements of m are better. We can just sort the values using the following key:
def key(x):
    w, p = x
    return w / (1 - p)

m.sort(key=key)
This requires explanation.
Suppose (w1, p1) is directly before (w2, p2) in the array. Then after processing these two items, t will have increased by w * (w1 + p1*w2) and w will have been multiplied by a factor of p1*p2. If we switch the order of these items, t will instead increase by w * (w2 + p2*w1), and w will be multiplied by the same factor p1*p2. Clearly, we should perform the switch if (w1 + p1*w2) > (w2 + p2*w1). Rearranging gives w1*(1 - p2) > w2*(1 - p1), and dividing both sides by the positive quantity (1 - p1)*(1 - p2) gives w1/(1-p1) > w2/(1-p2). If w1/(1-p1) <= w2/(1-p2), we can say that these two elements of m are "correctly" ordered.
In the optimal ordering of m, there will be no pair of adjacent items worth switching; for any adjacent pair of (w1, p1) and (w2, p2), we will have w1/(1-p1) <= w2/(1-p2). Since the relation of having w1/(1-p1) <= w2/(1-p2) is the natural total ordering on the w/(1-p) values, the fact that w1/(1-p1) <= w2/(1-p2) holds for any pair of adjacent items means that the list is sorted by the w/(1-p) values.
Your attempted solution fails because it only considers what a pair of elements would do to the value of the tail of the array. It doesn't consider that, rather than using a low-p element now to minimize the value of the tail, it might be better to save it for later so that its multiplier applies to more elements of m.
Note that the proof of our algorithm's validity relies on all p values being at least 0 and strictly less than 1. If p is 1, we can't divide by 1-p, and if p is greater than 1, dividing by 1-p reverses the direction of the inequality. These problems can be resolved using a comparator or a more sophisticated sort key. If p is less than 0, then w can switch sign, which reverses the logic of what items should be switched. Then we do need to know about the current state of the algorithm to decide which elements are better, and I'm not sure what to do then.
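As a quick sanity check, here is a small sketch comparing the sort-key approach against the brute force on the failing test case from the question (minimize_me is restated so the snippet is self-contained):

import itertools

def minimize_me(m):
    t = 0
    w = 1
    for weight, p in m:
        t += w * weight
        w *= p
    return t

m = [[493, 0.7181996086105675], [971, 0.19915848527349228],
     [736, 0.5184210526315789], [591, 0.5904761904761905],
     [467, 0.6161290322580645]]

sorted_m = sorted(m, key=lambda x: x[0] / (1 - x[1]))
brute_best = min(itertools.permutations(m), key=minimize_me)

print([m.index(item) for item in sorted_m])                    # [1, 4, 3, 2, 0]
print(minimize_me(sorted_m) == minimize_me(list(brute_best)))  # True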
Related
Say I have a list of 10 numbers as below
lst = [1, 3, 6, 10, 15, 20, 27, 28, 30, 40]
I want to get one subset of k numbers in which every pair of numbers differs by at least d. Right now I am using itertools.combinations, and the code works fine for small lists.
from itertools import combinations

def k_subset_at_least_d_apart(nums, k, d):
    for subset in combinations(nums, k):
        a = sorted(subset)
        if min([t - s for s, t in zip(a, a[1:])]) >= d:
            return subset
    return None
For example, if I want to get a subset of 5 with numbers at least 6 apart from each other:
subset = k_subset_at_least_d_apart(lst, k=5, d=6)
print(subset)
# (1, 10, 20, 27, 40)
However, my code becomes too slow when I want to e.g. get a subset of 20 numbers from a list of 50 numbers in which the numbers are at least 10 apart. Can anyone suggest a relatively fast algorithm that can first determine whether such a subset exists, and then find one? Thanks in advance.
Sure; you can just greedily repeat the step of taking the smallest valid element:
def k_subset_at_least_d_apart(nums, k, d):
    last = -float('inf')
    answer = []
    for element in sorted(nums):
        if element - last >= d:
            answer.append(element)
            last = element
            if len(answer) == k:
                return answer
    return None
If nums aren't presorted, it's hard to do (asymptotically) much better than this code, which takes O(n log n) time. You can get an O(n*k) algorithm with k repeated passes, which may be faster depending on n and k. If they are sorted, you can do the 'greedily take min valid', but with a binary search to find the next smallest valid element, for an O(k log n) algorithm.
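For the presorted case, a minimal sketch of that binary-search variant might look like this (k_subset_presorted is just an illustrative name, and nums is assumed to be sorted ascending):

import bisect

def k_subset_presorted(nums, k, d):
    # nums must already be sorted in ascending order
    answer = []
    i = 0
    while i < len(nums) and len(answer) < k:
        answer.append(nums[i])
        # binary search for the first element >= nums[i] + d
        i = bisect.bisect_left(nums, nums[i] + d, i + 1)
    return answer if len(answer) == k else None

print(k_subset_presorted([1, 3, 6, 10, 15, 20, 27, 28, 30, 40], 5, 6))
# [1, 10, 20, 27, 40]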
Proof of the greedy algorithm:
Suppose the greedy algorithm gives a solution G = g0, g1, ... gm and an optimal (length-k) solution is given by A = a0, a1, ... a_(k-1), with m <= k-1 (both in sorted, increasing order).
Let i be the smallest index where ai != gi. If i is 0, we must have g0 < a0, since g0 is min(nums), so we can replace a0 with g0 in A for another optimal solution A' = g0, a1, ... a_(k-1). Otherwise, for i > 0, (details left as an exercise, but very similar to above), if a0 == g0, a1 == g1 ... a_(i-1)==g_(i-1), we can also replace ai with gi to get another optimal solution.
Eventually we get that there exists an optimal solution A* such that G is a prefix of A*. We can then argue by contradiction: if G had length below k and were a proper prefix of an optimal solution, the greedy algorithm would have extended G when it saw the element a_(m+1).
Given a vector of numbers v, I can access sums of sections of this vector by using cumulative sums, i.e., instead of O(n)
v = [1,2,3,4,5]

def sum_v(i,j):
    return sum(v[i:j])
I can do O(1)
import itertools

v = [1,2,3,4,5]
cache = [0] + list(itertools.accumulate(v))

def sum_v(i,j):
    return cache[j] - cache[i]
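For example, with the list above:

print(sum_v(1, 4))  # 9, i.e. 2 + 3 + 4, computed as cache[4] - cache[1] = 10 - 1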
Now, I need something similar but for pairwise instead of sum_v:
def pairwise(i,j):
    ret = 0
    for p in range(i,j):
        for q in range(p+1,j):
            ret += f(v[p], v[q])
    return ret
where f is, preferably, something relatively arbitrary (e.g., * or ^ or ...). However, something working for just product or just XOR would be good too.
PS1. I am looking for a speed-up in terms of O, not generic memoization such as functools.cache.
PS2. The question is about algorithms, not implementations, and is thus language-agnostic. I tagged it python only because my examples are in python.
PS3. Obviously, one can precompute all values of pairwise, so the solution should be o(n^2) both in time and space (preferably linear).
For binary operations such as or, and, xor, an O(N) algorithm is possible.
Let's consider XOR for this example, but this can be easily modified for OR/AND as well.
The most important thing to note here is that the result of a binary operator on bit x of two numbers does not affect the result for bit y (you can easily see that by trying something like 010 ^ 011 = 001). So we can count the contribution made by each bit position of all the numbers to the final sum, one bit position at a time, starting from the least significant bit. Here's a simple algorithm for that:
Construct a simple table dp, where dp[i][j] = count of numbers in range [i,n) with jth bit set
from math import log2

l = [5,3,1,7,8]
n = len(l)
max_binary_length = int(max(log2(x) for x in l)) + 1  # maximum number of bits we need to check

# dp[i][j] = count of numbers in l[i:] with the jth bit set
dp = [[0] * max_binary_length for _ in range(n + 1)]
for i in range(n - 1, -1, -1):
    for j in range(max_binary_length):
        dp[i][j] = dp[i + 1][j] + ((l[i] >> j) & 1)

ans = 0
for j in range(max_binary_length):
    # we check the jth bits of all numbers here
    for i in range(n):
        # we need sum((l[i] ^ l[q]) for q in range(i + 1, n)), bit by bit
        if ((l[i] >> j) & 1) == 0:
            # since 0 ^ 1 = 1, we need the count of later numbers with jth bit 1
            count = dp[i + 1][j]
        else:
            # we need the count of later numbers with jth bit 0
            count = (n - i - 1) - dp[i + 1][j]
        ans += count * (1 << j)  # the jth bit contributes 2**j to the sum whenever it is set
print(ans)
In most cases, for integers, number of bits <= 32. So this should have a complexity of O(N*log2(max(A[i]))) == O(N*32) == O(N).
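As a quick check of the snippet above (reusing its l, n and ans), the result can be compared against the naive O(N^2) double loop:

naive = sum(l[p] ^ l[q] for p in range(n) for q in range(p + 1, n))
print(naive == ans)  # True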
In principle, you can always precompute every possible output in Θ(n²) space and then answer queries in Θ(1) by just looking it up in the precomputed table. Everything else is a trade-off depending on the cost of precomputation time, space, and actual computation time; the interesting question is what can be done with o(n²) space, i.e. sub-quadratic. This will generally depend on the application, and on properties of the binary operation f.
In the particular case where f is *, we can get Θ(1) lookups with only Θ(n) space: we'll take advantage of the fact that the sum over pairs with p < q equals the sum over all ordered pairs (which is s*s, where s is the sum of the values in the range), minus the sum of the pairs with p = q (the sum of the squares), all divided by 2 to account for the pairs with p > q.
# input data
v = [1, 2, 3, 4, 5]
n = len(v)

# precomputation
partial_sums = [0] * (n + 1)
partial_sums_squares = [0] * (n + 1)
for i, x in enumerate(v):
    partial_sums[i + 1] = partial_sums[i] + x
    partial_sums_squares[i + 1] = partial_sums_squares[i] + x * x

# query response
def pairwise(i, j):
    s = partial_sums[j] - partial_sums[i]
    s2 = partial_sums_squares[j] - partial_sums_squares[i]
    return (s * s - s2) / 2
More generally, this works whenever f is commutative and distributes over the accumulator operation (+ in this case). I wrote the example here without itertools, so that it is more easily translatable to other languages, since the question is meant to be language-agnostic.
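For instance, a spot check of the query response against the naive double loop over the slice v[1:4]:

naive = sum(v[p] * v[q] for p in range(1, 4) for q in range(p + 1, 4))
print(pairwise(1, 4), naive)  # 26.0 26, i.e. 2*3 + 2*4 + 3*4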
I'm trying to write the fastest algorithm possible to return the number of "magic triples" (i.e. x, y, z where z is a multiple of y and y is a multiple of x) in a list of 3-2000 integers.
(Note: I believe the list was expected to be sorted and unique but one of the test examples given was [1,1,1] with the expected result of 1 - that is a mistake in the challenge itself though because the definition of a magic triple was explicitly noted as x < y < z, which [1,1,1] isn't. In any case, I was trying to optimise an algorithm for sorted lists of unique integers.)
I haven't been able to work out a solution that doesn't include having three consecutive loops and therefore being O(n^3). I've seen one online that is O(n^2) but I can't get my head around what it's doing, so it doesn't feel right to submit it.
My code is:
def solution(l):
    if len(l) < 3:
        return 0
    elif l == [1, 1, 1]:
        return 1
    else:
        halfway = int(l[-1] / 2)
        quarterway = int(halfway / 2)
        quarterIndex = 0
        halfIndex = 0
        for i in range(len(l)):
            if l[i] >= quarterway:
                quarterIndex = i
                break
        for i in range(len(l)):
            if l[i] >= halfway:
                halfIndex = i
                break
        triples = 0
        for i in l[:quarterIndex + 1]:
            for j in l[:halfIndex + 1]:
                if j != i and j % i == 0:
                    multiple = 2
                    while (j * multiple) <= l[-1]:
                        if j * multiple in l:
                            triples += 1
                        multiple += 1
        return triples
I've spent quite a lot of time going through examples manually and removing loops through unnecessary sections of the lists but this still completes a list of 2,000 integers in about a second where the O(n^2) solution I found completes the same list in 0.6 seconds - it seems like such a small difference but obviously it means mine takes 60% longer.
Am I missing a really obvious way of removing one of the loops?
Also, I saw mention of making a directed graph and I see the promise in that. I can make the list of first nodes from the original list with a built-in function, so in principle I presume that means I can make the overall graph with two for loops and then return the length of the third node list, but I hit a wall with that too. I just can't seem to make progress without that third loop!!
from array import array

def num_triples(l):
    n = len(l)
    lower_counts = array("I", (0 for _ in range(n)))
    upper_counts = lower_counts[:]
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[i] += 1
                upper_counts[j] += 1
    return sum(nx * nz for nz, nx in zip(lower_counts, upper_counts))
Here, lower_counts[i] is the number of pairs of which the ith number is the y, and z is the other number in the pair (i.e. the number of different z values for this y).
Similarly, upper_counts[i] is the number of pairs of which the ith number is the y, and x is the other number in the pair (i.e. the number of different x values for this y).
So the number of triples in which the ith number is the y value is just the product of those two numbers.
The use of an array here for storing the counts is for scalability of access time. Tests show that up to n=2000 it makes negligible difference in practice, and even up to n=20000 it only made about a 1% difference to the run time (compared to using a list), but it could in principle be the fastest growing term for very large n.
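For example, on a small sorted list:

print(num_triples([1, 2, 3, 4, 5, 6]))  # 3, i.e. (1,2,4), (1,2,6), (1,3,6)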
How about using itertools.combinations instead of nested for loops? Combined with list comprehension, it's cleaner and much faster. Let's say l = [your list of integers] and let's assume it's already sorted.
from itertools import combinations

def div(i, j, k):  # this function has the logic
    return l[k] % l[j] == l[j] % l[i] == 0

r = sum([div(i, j, k) for i, j, k in combinations(range(len(l)), 3) if i < j < k])
@alaniwi provided a very smart iterative solution.
Here is a recursive solution.
def find_magicals(lst, nplet):
    """Find the number of magical n-plets in a given lst"""
    res = 0
    for i, base in enumerate(lst):
        # find all the multiples of current base
        multiples = [num for num in lst[i + 1:] if not num % base]
        res += len(multiples) if nplet <= 2 else find_magicals(multiples, nplet - 1)
    return res

def solution(lst):
    return find_magicals(lst, 3)
The problem can be divided as follows: choose any number in the original list as the base (i.e. x), then count how many 2-plets we can find among the numbers bigger than the base. Since the method for finding all 2-plets is the same as for finding triplets, we can solve the problem recursively.
From my testing, this recursive solution is comparable to, if not more performant than, the iterative solution.
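A quick usage example (same small list as above):

print(solution([1, 2, 3, 4, 5, 6]))  # 3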
This answer was the first suggestion by @alaniwi and is the one I've found to be the fastest (at 0.59 seconds for a 2,000-integer list).
def solution(l):
    n = len(l)
    lower_counts = dict((val, 0) for val in l)
    upper_counts = lower_counts.copy()
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[lower] += 1
                upper_counts[upper] += 1
    return sum(lower_counts[y] * upper_counts[y] for y in l)
I think I've managed to get my head around it. What it essentially does is compare each number in the list with every larger number to see whether the larger one is divisible by the smaller, and it builds two dictionaries:
One with, for each number, how many larger numbers it divides evenly into,
One with, for each number, how many smaller numbers divide evenly into it.
You then multiply the two values for each key, because a key having a 0 in either dictionary means it cannot be the second number (y) of a triple.
Example:
l = [1,2,3,4,5,6]
lower_counts = {1:5, 2:2, 3:1, 4:0, 5:0, 6:0}
upper_counts = {1:0, 2:1, 3:1, 4:2, 5:1, 6:3}
triple_tuple = ([1,2,4], [1,2,6], [1,3,6])
You are given four arrays A, B, C, D each of size N.
Find the maximum value M of the expression below:
M = max(|A[i] - A[j]| + |B[i] - B[j]| + |C[i] - C[j]| + |D[i] - D[j]| + |i - j|)
where 1 <= i < j <= N, and |x| refers to the absolute value of x.
Constraints
2 <= N <= 10^5
1 <= Ai,Bi,Ci,Di <= 10^9
Input: N,A,B,C,D
Output: M
Example:
Input:
5
5,7,6,3,9
7,9,2,7,5
1,9,9,3,3
8,4,1,10,5
Output:
24
I have tried it this way:
def max_value(arr1, arr2, arr3, arr4, n):
    res = 0
    # Iterate two for loops, one for i and another for j.
    for i in range(n):
        for j in range(n):
            temp = abs(arr1[i] - arr1[j]) + abs(arr2[i] - arr2[j]) + abs(arr3[i] - arr3[j]) + abs(arr4[i] - arr4[j]) + abs(i - j)
            if temp > res:
                res = temp
    return res
This is O(n^2).
But I want a better time complexity solution. This will not work for higher values of N.
Here is a solution for a single array.
One can generalize the solution for a single array that you showed. Given a number K of arrays, including the array of indices, one can form 2^K possible sign combinations of the arrays to get rid of the absolute values. It is then easy to take the max and min of each of these combinations separately and compare them. This is order O(K*n*2^K), much better than the original O(K*n^2) for the values you report.
Here is a code that works on an arbitrary number of input arrays.
import numpy as np

def run(n, *args):
    aux = np.arange(n)
    K = len(args) + 1
    rows = 2 ** K
    x = np.zeros((rows, n))
    for i in range(rows):
        temp = 0
        for m, a in enumerate(args):
            temp += np.array(a) * ((-1) ** int(f"{i:0{K}b}"[-(1 + m)]))
        temp += aux * ((-1) ** int(f"{i:0{K}b}"[-K]))
        x[i] = temp
    x_max = np.max(x, axis=-1)
    x_min = np.min(x, axis=-1)
    res = np.max(x_max - x_min)
    return res
The for loop perhaps deserves more explanation: in order to generate all possible sign combinations for the absolute values, I assign each combination to an integer and rely on the binary representation of that integer to choose which of the K vectors must be taken negative.
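Running this on the example from the question reproduces the expected output (run returns a NumPy float, so it prints 24.0):

A = [5, 7, 6, 3, 9]
B = [7, 9, 2, 7, 5]
C = [1, 9, 9, 3, 3]
D = [8, 4, 1, 10, 5]
print(run(5, A, B, C, D))  # 24.0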
Idea for faster solution
If you are only interested in the maximum M, you could search for the minimum and maximum values of A, B, C, D and i-j. Let's say i_Amax is the index i at which A is maximal.
Now you look up B[i_Amax], C[i_Amax], and so on, do the same for i_Amin, and calculate M from the differences between the max and min values.
You repeat the previous step with the index of the maximum of B (i_Bmax) and calculate M, and keep repeating until you have gone through A, B, C, D and i-j.
You should then have five candidate terms, and one of them should be the maximum.
If you don't have a unique minimum or maximum, you have to compute the indices of all the possible minimums and maximums.
I think this should find the maximum and is faster than n^2, especially for big n, but I have not implemented it myself, so you should think it through to check whether I made a logical error and whether some maxima cannot be found with this idea.
I hope that helps!
Recently I became interested in the subset-sum problem, which is finding a zero-sum subset of a given set of integers. I found some solutions on SO; in addition, I came across a particular solution which uses a dynamic programming approach. I translated his solution into Python based on his qualitative descriptions. I'm trying to optimize this for larger lists, which eats up a lot of my memory. Can someone recommend optimizations or other techniques to solve this particular problem? Here's my attempt in Python:
import random
from time import time
from itertools import product
time0 = time()
# create a zero matrix of size a (row), b(col)
def create_zero_matrix(a, b):
    return [[0] * b for x in xrange(a)]
# generate a list of size num with random integers with an upper and lower bound
def random_ints(num, lower=-1000, upper=1000):
    return [random.randrange(lower, upper + 1) for i in range(num)]
# split a list up into N and P where N be the sum of the negative values and P the sum of the positive values.
# 0 does not count because of additive identity
def split_sum(A):
    N_list = []
    P_list = []
    for x in A:
        if x < 0:
            N_list.append(x)
        elif x > 0:
            P_list.append(x)
    return [sum(N_list), sum(P_list)]
# since the column indexes are in the range from 0 to P - N
# we would like to retrieve them based on the index in the range N to P
# n := row, m := col
def get_element(table, n, m, N):
    if n < 0:
        return 0
    try:
        return table[n][m - N]
    except:
        return 0
# same definition as above
def set_element(table, n, m, N, value):
    table[n][m - N] = value
# input array
#A = [1, -3, 2, 4]
A = random_ints(200)
[N, P] = split_sum(A)
# create a zero matrix of size m (row) by n (col)
#
# m := the number of elements in A
# n := P - N + 1 (by definition N <= s <= P)
#
# each element in the matrix will be a value of either 0 (false) or 1 (true)
m = len(A)
n = P - N + 1
table = create_zero_matrix(m, n)
# set first element in index (0, A[0]) to be true
# Definition: Q(1,s) := (x1 == s). Note that index starts at 0 instead of 1.
set_element(table, 0, A[0], N, 1)
# iterate through each table element
#for i in xrange(1, m): #row
# for s in xrange(N, P + 1): #col
for i, s in product(xrange(1, m), xrange(N, P + 1)):
    if get_element(table, i - 1, s, N) or A[i] == s or get_element(table, i - 1, s - A[i], N):
        #set_element(table, i, s, N, 1)
        table[i][s - N] = 1
# find zero-sum subset solution
s = 0
solution = []
for i in reversed(xrange(0, m)):
    if get_element(table, i - 1, s, N) == 0 and get_element(table, i, s, N) == 1:
        s = s - A[i]
        solution.append(A[i])
print "Solution: ",solution
time1 = time()
print "Time execution: ", time1 - time0
I'm not quite sure if your solution is exact or a PTA (poly-time approximation).
But, as someone pointed out, this problem is indeed NP-Complete.
Meaning, every known (exact) algorithm has an exponential time behavior on the size of the input.
Meaning, if you can process one operation every 0.1 nanosecond (about 10,000,000,000 operations per second), then for a list of 59 elements it will take roughly:
2^59 ops / 10,000,000,000 ops per second ≈ 2^26 seconds, and 2^26 / (3600 x 24 x 365) ≈ 2 years.
You can find heuristics, which give you just a CHANCE of finding an exact solution in polynomial time.
On the other hand, if you restrict the problem (to a different one) by putting bounds on the values of the numbers in the set, then the complexity reduces to polynomial time. But even then, the memory consumed can be a polynomial of very high order.
It will be much larger than the few gigabytes of RAM you have, and even much larger than the few terabytes on your hard drive (and that's for small values of the bound on the elements of the set).
Maybe this is the case for your dynamic programming algorithm.
It seemed to me that you were using a bound of 1000 when building your initialization matrix.
You can try a smaller bound. That is... if your input consistently consists of small values.
Good Luck!
Someone on Hacker News came up with the following solution to the problem, which I quite liked. It just happens to be in python :):
def subset_summing_to_zero(activities):
    subsets = {0: []}
    for (activity, cost) in activities.iteritems():
        old_subsets = subsets
        subsets = {}
        for (prev_sum, subset) in old_subsets.iteritems():
            subsets[prev_sum] = subset
            new_sum = prev_sum + cost
            new_subset = subset + [activity]
            if 0 == new_sum:
                new_subset.sort()
                return new_subset
            else:
                subsets[new_sum] = new_subset
    return []
I spent a few minutes with it and it worked very well.
An interesting article on optimizing python code is available here. Basically the main result is that you should inline your frequent loops, so in your case this would mean instead of calling get_element twice per loop, put the actual code of that function inside the loop in order to avoid the function call overhead.
Hope that helps! Cheers
First thing that catches the eye:
def split_sum(A):
    N_list = 0
    P_list = 0
    for x in A:
        if x < 0:
            N_list += x
        elif x > 0:
            P_list += x
    return [N_list, P_list]
Some advice:
Try to use a 1D list and use bitarray to reduce the memory footprint to a minimum (http://pypi.python.org/pypi/bitarray); you would just change the get/set functions. This should reduce your memory footprint by a factor of at least 64 (an integer in a list is a pointer to a typed integer object, so the factor can be around 3*32).
Avoid using try/except; figure out the proper ranges at the beginning instead. You might find that you gain a lot of speed.
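A minimal sketch of the first suggestion, assuming the bitarray package is installed (the helper names here are illustrative, not from the original code):

from bitarray import bitarray

# replace the m x n list-of-lists table with one flat bitarray of m * n bits
def create_bit_table(rows, cols):
    bits = bitarray(rows * cols)
    bits.setall(False)
    return bits

def get_bit(bits, cols, row, col):
    if row < 0:
        return False
    return bits[row * cols + col]

def set_bit(bits, cols, row, col, value=True):
    bits[row * cols + col] = value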
The following code works for Python 3.3+. I have used the itertools module, which has some great methods to use.
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

nums = input("Enter the Elements").strip().split()
inputSum = int(input("Enter the Sum You want"))

for i, combo in enumerate(powerset(nums), 1):
    total = 0
    for num in combo:
        total += int(num)
    if total == inputSum:
        print(combo)
The input/output is as follows:
Enter the Elements 1 2 3 4
Enter the Sum You want 5
('1', '4')
('2', '3')
Just change the values in your set w and correspondingly make an array x as big as the length of w, then pass, as the last argument of the subsetsum function, the sum for which you want subsets, and you will be done (if you want to check it by giving your own values).
def subsetsum(cs, k, r, x, w, d):
    # cs: sum of the elements chosen so far, k: current index,
    # r: sum of the remaining elements w[k:], x: inclusion flags, d: desired sum
    x[k] = 1
    if cs + w[k] == d:
        # including w[k] reaches the target: print the chosen elements
        for i in range(0, k + 1):
            if x[i] == 1:
                print(w[i], end=" ")
        print()
    elif cs + w[k] + w[k + 1] <= d:
        # keep w[k] and try to extend the subset
        subsetsum(cs + w[k], k + 1, r - w[k], x, w, d)
    if (cs + r - w[k] >= d) and (cs + w[k] <= d):
        # also explore the branch that leaves w[k] out
        x[k] = 0
        subsetsum(cs, k + 1, r - w[k], x, w, d)

# driver for the above code
w = [2, 3, 4, 5, 0]
x = [0, 0, 0, 0, 0]
subsetsum(0, 0, sum(w), x, w, 7)
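With the driver above, this prints the two subsets of [2, 3, 4, 5] that sum to 7: 2 5 and 3 4.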