You are given four arrays A, B, C, D, each of size N.
Find the maximum value M of the expression below:
M = max(|A[i] - A[j]| + |B[i] - B[j]| + |C[i] - C[j]| + |D[i] - D[j]| + |i - j|)
where 1 <= i < j <= N,
and |x| denotes the absolute value of x.
Constraints:
2 <= N <= 10^5
1 <= Ai,Bi,Ci,Di <= 10^9
Input: N,A,B,C,D
Output: M
Example:
Input:
5
5,7,6,3,9
7,9,2,7,5
1,9,9,3,3
8,4,1,10,5
Output:
24
I have tried it this way:

def max_value(arr1, arr2, arr3, arr4, n):
    res = 0
    # iterate over all pairs (i, j)
    for i in range(n):
        for j in range(n):
            temp = (abs(arr1[i] - arr1[j]) + abs(arr2[i] - arr2[j])
                    + abs(arr3[i] - arr3[j]) + abs(arr4[i] - arr4[j])
                    + abs(i - j))
            res = max(res, temp)
    return res
This is O(n^2), which will not work for larger values of N, so I want a solution with better time complexity.
Here is a solution for a single array.
One can generalize the solution for a single array that you showed. Given a number K of arrays, including the array of indices, one can make 2**K possible combinations of arrays to get rid of the absolute values. It is then easy to just take the max and min of each of these combinations separately and compare them. This is order O(Kn*2^K), much better than the original O(Kn^2) for the values you report.
Here is code that works on an arbitrary number of input arrays:

import numpy as np

def run(n, *args):
    aux = np.arange(n)   # plays the role of the index term in |i - j|
    K = len(args) + 1    # number of arrays, counting the index array
    rows = 2 ** K        # one row per sign combination
    x = np.zeros((rows, n))
    for i in range(rows):
        temp = 0
        for m, a in enumerate(args):
            # bit m of i picks the sign of array m
            temp += np.array(a) * ((-1) ** int(f"{i:0{K}b}"[-(1 + m)]))
        # the top bit picks the sign of the index array
        temp += aux * ((-1) ** int(f"{i:0{K}b}"[-K]))
        x[i] = temp
    x_max = np.max(x, axis=-1)
    x_min = np.min(x, axis=-1)
    res = np.max(x_max - x_min)
    return res
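As a quick check (mine, not part of the original answer), running this on the sample input from the question reproduces the expected result:

A = [5, 7, 6, 3, 9]
B = [7, 9, 2, 7, 5]
C = [1, 9, 9, 3, 3]
D = [8, 4, 1, 10, 5]
print(run(5, A, B, C, D))  # 24.0 -- matches the expected output above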
The for loop maybe deserves more explanation: in order to make all possible combinations of absolute values, I assign each combination to an integer and rely on the binary representation of this integer to choose which ones of the K vectors must be taken negative.
Idea for faster solution
If you are only interested in the maximum M, you could search for the minimum and maximum values of A, B, C, D, and i - j. Say i_Amax is the index of the maximum of A.
Now look up B[i_Amax], C[i_Amax], and so on, do the same for i_Amin, and compute M from the differences of the max and min values.
Repeat the previous step with the index of the maximum of B (i_Bmax) and compute M again, and so on until you have gone through A, B, C, D, and i - j.
You should then have five candidate values, and one of them should be the maximum.
If there is no unique minimum or maximum, you have to compute the candidates for all the possible minimum and maximum indices.
I think this finds the maximum and is faster than n^2, especially for big n, but I have not implemented it myself, so you have to think it through and check whether I made a logical error that keeps it from finding every maximum. A rough sketch is below.
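Here is a literal sketch of that heuristic (my code, not part of the answer, and subject to the caveat above that it may miss the true maximum in general; on the sample input from the question it does find 24):

def heuristic_max(A, B, C, D):
    n = len(A)

    def M(i, j):
        # the full expression, evaluated for one candidate pair (i, j)
        return (abs(A[i] - A[j]) + abs(B[i] - B[j]) + abs(C[i] - C[j])
                + abs(D[i] - D[j]) + abs(i - j))

    best = 0
    # one candidate pair per array: (index of its max, index of its min)
    for arr in (A, B, C, D, list(range(n))):
        i_max = max(range(n), key=lambda i: arr[i])
        i_min = min(range(n), key=lambda i: arr[i])
        best = max(best, M(i_max, i_min))
    return best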
I hope that helps!
We are given a number N and have to find pairs (i, j) where i^3 = j^2.
For example, let N = 50: there are 3 such pairs, (1,1), (4,8), (9,27).
Basically, we have to find pairs where the cube of one number is the same as the square of the other number in the pair.
The constraints are
1 <= N < 10^6
1 <= i, j < N
The naive approach uses 2 for loops, iterating through the candidates and printing the pairs where the cube equals the square; its time complexity is O(n^2):
def get_pair(N):
    for i in range(1, N):
        for j in range(1, N):
            if i*i*i == j*j:
                print(i, j)

N = 50
get_pair(N)
what will be an optimal way to solve this problem with a better time complexity?
Since you're working with integers, if there exists some number M = i^3 = j^2 for i and j between 1 and N, then that means there exists a k such that M = k^6. To find i and j, simply compare the representations of M:
(1) M = k^6 = i^3 = (k^2)^3 therefore i = k^2
(2) M = k^6 = j^2 = (k^3)^2 therefore j = k^3
Since j is always greater than or equal to i, you only need to check that 1 <= k^3 < N. In other words, k should be less than the cube root of N.
k     M = k^6         i = k^2    j = k^3
2     64              4          8
3     729             9          27
4     4,096           16         64
5     15,625          25         125
6     46,656          36         216
...   ...             ...        ...
97    8.329x10^11     9409       912,673
98    8.858x10^11     9604       941,192
99    9.415x10^11     9801       970,299
Note that 100 isn't a valid candidate for k, because that would make j equal to N rather than strictly less than N (if we're going with N = 10^6).
So to get the list of tuples that satisfy your problem, find the values of k such that 1 <= k^3 < N and return the square and cube of each in a tuple.
import math
from typing import List, Tuple
N: int = 10**6
pairs: List[Tuple[int, int]] = [(k * k, k * k * k) for k in range(1, math.ceil(N**(1 / 3)))]
print(pairs)
This is a list comprehension, a shorthand for a for-loop.
I'm basically asking Python to generate a list of tuples over an index k that falls in range(1, math.ceil(N**(1 / 3))). That range is the first column of the table above, extended down to k = 1 so that the trivial pair (1, 1) from the example is included.
Then, for every k in that range, I make a tuple whose first item is k^2 and whose second item is k^3, just like the last two columns of the table.
Also threw in the typing library in there for good measure. Clear code is good code, even for something as small as this. Python can figure out that pairs is a list of tuples without anyone telling it, but this way I've explicitly enforced that variable to be a list of tuples to avoid any confusion when someone tries to give it a different value or isn't sure what the variable contains.
Another naive approach could be to use the "basic" values:

def get_pair(N):
    for i in range(1, N):
        if i**3 >= N:
            break  # i**3 > i**2 for i > 1, so no further pairs exist below N
        print(i**2, i**3)

Time complexity is O(N^(1/3)), since the loop stops once i**3 reaches N.
This works because the first element cubed equals the second element squared:
first = a^2
second = a^3
first^3 = a^(2*3) = a^6
second^2 = a^(3*2) = a^6
You can use itertools' combinations_with_replacement function:

from itertools import combinations_with_replacement as combinations

def get_pair(N):
    for i, j in combinations(range(1, N), 2):
        if i*i*i == j*j:
            print(i, j)

N = 50
get_pair(N)
You can do this with one loop (and minimal iterations) if you use the fact that the pairs (a, b) always satisfy a = i^2 and b = i^3 = a * i:

def get_pair(N):
    i = 1
    a = 1
    while a * i < N:
        b = a * i
        print(a, b)
        i += 1
        a = i**2

N = 50
get_pair(N)
This gets all 3 pairs:
1 1
4 8
9 27
In only 3 total iterations.
Problem Description:
I'm working on making a function which gives me a definition for a particular combination of several descriptors based on a single index. My inputs are a set of raw features X = [feat0,feat1,feat2,feat3,feat4], a list of powers to be used pow = [1,2,3], and a list of group sizes sizes = [1,3,5]. A valid output might look like the following:
feat0^2 * feat4^3 * feat1^1
This output is valid because feat0, feat4, and feat1 exist within X, their powers exist within pow, and the number of features being combined is in sizes.
Invalid edge cases include:
values which don't exist in X, powers not in pow, and combination sizes not in sizes
combinations that are identical to another are invalid: feat0^2 * feat1^3 and feat1^3 * feat0^2 are the same
combinations that include multiples of the same feature are invalid: feat0^1 * feat0^3 * feat2^2 is invalid
under the hood I'm encoding these groupings as lists of tuples. So feat0^2 * feat4^3 * feat1^1 would be represented as [(0,2), (4,3), (1,1)], where the first element in the tuple is the feature index, and the second is the power.
Question:
my question is, how can I create a 1 to 1 mapping of a particular combination to an index i? I would like to get the number of possible combinations, and be able to plug in an integer i to a function, and have that function generate a particular combination. Something like this:
import random

X = [0.123, 0.111, 11, -5]
pow = [1, 2, 3]
sizes = [1, 3]

# getting the total number of combinations
numCombos = get_num_combos(X, pow, sizes)

# getting a random index corresponding to a grouping
i = random.randint(0, numCombos - 1)

# getting the grouping
grouping = generate_grouping(i, X, pow, sizes)
print(grouping)
Resulting in something like
[(0,1), (1,2), (3,1)]
So far, figuring out the generation when not accounting for the various edge cases wasn't too hard, but I'm at a loss for how to account for edge cases 2 and 3; making it guaranteed that no value of i is algebraically equivalent to any other value of i, and that the same feature does not appear multiple times in a grouping.
Current Progress
import math
import bisect
import numpy as np

# computes n choose k
def get_num_groupings(n, k):
    return int(math.factorial(n) / (math.factorial(k) * math.factorial(n - k)))

i = 150
n = 5
m = 3
sizes = [1, 3, 5]

# computing the number of elements for each group size
numElements = [m**k * get_num_groupings(n, k) for k in sizes]

# index bins for each group size
bins = list(np.cumsum(numElements))[:-1]

# getting the current group size
binIdx = bisect.bisect_left(bins, i)
curSize = sizes[binIdx]

# adding idx 0 to bins
bins = [0] + bins

# getting the location of i in the bin
z = i - bins[binIdx]

# getting the combination rank and the power-product index
# (z = ci * m**curSize + pi, so divmod recovers both)
ci, pi = divmod(z, m**curSize)

# getting the indexes of the powers (base-m digits of pi)
pidx = [(pi // m**(curSize - (num + 1))) % m for num in range(curSize)]

# getting the indexes of the features
# TODO cidx = unrank(ci, curSize)

This is based on the Mad Physicist's answer, though I haven't figured out how to get cidx yet. Some of the variable names are rewritten for my own understanding. To my knowledge, this implementation works by logically separating which features are combined from which powers each one has. So far I can get the powers from an index i, and once unrank is ironed out I should be able to get the indexes of the features used.
Let's look at a slightly different problem that's closely related to what you want: generating all the possible valid combinations.
If you choose a size and a power, finding all possible combinations of features is fairly straightforward:
from itertools import combinations, product

n = len(X)
m = len(powers)
k = size = ...  # e.g. 3
pow = ...       # e.g. [1, 2, 3]
The iterator of unique combinations of features is given by
def elements(X, size, pow):
    for x in combinations(X, size):
        yield sum(e**p for p, e in zip(pow, x))
The equivalent one-liner would be
(sum(e**p for p, e in zip(pow, x)) for x in combinations(X, size))
This generator has exactly n choose k unique elements. These elements meet all your conditions by definition.
Now you can loop over all possible sizes and product of powers to get all the options:
def all_features(X, sizes, powers):
    for size in sizes:
        for pow in product(powers, repeat=size):
            for x in combinations(X, size):
                yield sum(e**p for p, e in zip(pow, x))
The total number of elements is the sum for each k of m**k * n choose k.
Now that you've counted the possibilities, you can compute the mapping of element to index and vice versa, using a combinatorial number system. Sample ranking and unranking functions for combinations are shown here. You can use them after you adjust the index for the size and power bins.
To show what I mean, assume you have three functions (given in the linked answer):
choose(n, k) computes n choose k
rank(combo) accepts the ordered indices of a specific combination and returns the rank.
unrank(ind, k) accepts a rank and size, and returns the k indices of the corresponding combination.
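For reference, here is one possible implementation of unrank in that spirit (my sketch, not the linked answer's code; note it takes an explicit n argument so it is self-contained):

from math import comb  # Python 3.8+

def unrank(rank, k, n):
    # return the k ascending indices of the combination with the given
    # lexicographic rank among all k-subsets of range(n)
    result = []
    start = 0
    for slots in range(k, 0, -1):
        for c in range(start, n):
            # number of k-subsets whose next element is c, given `slots` spots left
            block = comb(n - 1 - c, slots - 1)
            if rank < block:
                result.append(c)
                start = c + 1
                break
            rank -= block
    return result

For example, unrank(0, 3, 5) returns [0, 1, 2] and unrank(9, 3, 5) returns [2, 3, 4].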
You can then compute the offsets of each size group and the step for each power within that group. Let's work through your concrete example with n = 5, m = 3, and sizes = [1, 3, 5].
The number of elements for each size is given by
elements = [m**k * choose(n, k) for k in sizes]
The total number of possible arrangements is sum(elements):
3**1 * choose(5, 1) + 3**3 * choose(5, 3) + 3**5 * choose(5, 5) = 3 * 5 + 27 * 10 + 243 * 1 = 15 + 270 + 243 = 528
The cumulative sum is useful to convert between index and element:
cumsum = [0, 15, 285]
When you get an index, you can check which bin it falls in using bisect.
Let's say you were given index = 55. Since 15 <= 55 < 285, your offset is 15 and size = 3. Within the size = 3 group, you have an offset of z = 55 - 15 = 40.
Within the k = 3 group, there are m**k = 3**3 = 27 power products. The combination rank is ci = z // m**k and the index of the power product is pi = z % m**k.
So the indices of the power are given by
pidx = [(pi // m**(k - 1)) % m, (pi // m**(k - 2)) % m, ...]
Similarly, the indices of the combination are given by
cidx = unrank(ci, k)
You can convert all these indices into a value using something like
sum(X[q]**powers[p] for p, q in zip(pidx, cidx))
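Putting the pieces together on the worked example (a sketch of mine, using the unrank shown earlier with its extra n argument):

n, m, k = 5, 3, 3          # 5 features, 3 powers, the size-3 bin
powers = [1, 2, 3]
z = 55 - 15                # offset of index 55 within the size-3 bin
ci, pi = divmod(z, m**k)   # ci = 1, pi = 13
pidx = [(pi // m**(k - 1 - d)) % m for d in range(k)]  # [1, 1, 1]
cidx = unrank(ci, k, n)    # [0, 1, 3]
value = sum(X[q]**powers[p] for p, q in zip(pidx, cidx))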
The Problem:
You are given an array m of size n, where each value of m is composed of a weight w, and a percentage p.
m = [m0, m1, m2, ..., m(n-1)] = [[m0w, m0p], [m1w, m1p], [m2w, m2p], ..., [m(n-1)w, m(n-1)p]]
So we'll represent this in python as a list of lists.
We are then trying to find the minimum value of this function:
def minimize_me(m):
    t = 0
    w = 1
    for i in range(len(m)):
        current = m[i]
        t += w * current[0]   # add this item's weight, scaled by the running product
        w *= current[1]       # fold this item's percentage into the product
    return t
where the only thing we can change about m is its ordering. (i. e. rearrange the elements of m in any way) Additionally, this needs to complete in better than O(n!).
Brute Force Solution:
import itertools
import sys

min_t = sys.maxint
min_permutation = None
for permutation in itertools.permutations(m):
    t = minimize_me(list(permutation))
    if t < min_t:
        min_t = t
        min_permutation = list(permutation)
Ideas On How To Optimize:
the idea:
Instead of finding the best order outright, see if we can find a way to compare two given values of m when we know the state of the problem (the code might explain this more clearly). If I can build this bottom-up (starting from the end, assuming I have no optimal solution yet) and can create a comparison that says one value of m is definitively better than another, then I can construct an optimal solution by repeatedly taking the best value, updating the state, and comparing the remaining values of m.
the code:
import itertools
import sys

def compare_m(a, b, v):
    # evaluate the two possible orders of a and b, given the accumulated tail value v
    a_first = b[0] + b[1] * (a[0] + a[1] * v)
    b_first = a[0] + a[1] * (b[0] + b[1] * v)
    if a_first > b_first:
        return a, a_first
    else:
        return b, b_first

best_ordering = []
v = 0
while len(m) > 1:
    best_pair_t = sys.maxint
    best_m = None
    for pair in itertools.combinations(m, 2):
        cand, pair_t = compare_m(pair[0], pair[1], v)  # the better element of the pair and its score
        if pair_t < best_pair_t:
            best_pair_t = pair_t
            best_m = cand
    best_ordering.append(best_m)
    m.remove(best_m)
    v = best_m[0] + best_m[1] * v

first = m[0]
best_ordering.append(first)
However, this is not working as intended. The first value is always right, and roughly 60-75% of the time the entire solution is optimal. However, in some cases it looks like the way I update the value v that gets passed back into my comparison makes it evaluate much higher than it should. Here's the script I'm using to test against:
import random

m = []
for i in range(0, 5):
    w = random.randint(1, 1023)
    p = random.uniform(0.01, 0.99)
    m.append([w, p])
Here's a particular test case demonstrating the error:
m = [[493, 0.7181996086105675], [971, 0.19915848527349228], [736, 0.5184210526315789], [591, 0.5904761904761905], [467, 0.6161290322580645]]
optimal solution (just the indices) = [1, 4, 3, 2, 0]
my solution (just the indices) = [4, 3, 1, 2, 0]
It feels very close, but I cannot for the life of me figure out what is wrong. Am I looking at this the wrong way? Does this seem like it's on the right track? Any help or feedback would be greatly appreciated!
We don't need any information about the current state of the algorithm to decide which elements of m are better. We can just sort the values using the following key:
def key(x):
    w, p = x
    return w / (1 - p)

m.sort(key=key)
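As a quick check (my addition, not part of the original answer), sorting the failing test case from the question by this key reproduces the stated optimal ordering:

m = [[493, 0.7181996086105675], [971, 0.19915848527349228],
     [736, 0.5184210526315789], [591, 0.5904761904761905],
     [467, 0.6161290322580645]]
order = sorted(range(len(m)), key=lambda i: m[i][0] / (1 - m[i][1]))
print(order)  # [1, 4, 3, 2, 0] -- the optimal ordering from the question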
This requires explanation.
Suppose (w1, p1) is directly before (w2, p2) in the array. Then after processing these two items, t will be increased by an increment of w * (w1 + p1*w2) and w will be multiplied by a factor of p1*p2. If we switch the order of these items, t will be increased by an increment of w * (w2 + p2*w1) and w will be multiplied by a factor of p1*p2. Clearly, we should perform the switch if (w1 + p1*w2) > (w2 + p2*w1), or equivalently after a little algebra, if w1/(1-p1) > w2/(1-p2). If w1/(1-p1) <= w2/(1-p2), we can say that these two elements of m are "correctly" ordered.
In the optimal ordering of m, there will be no pair of adjacent items worth switching; for any adjacent pair of (w1, p1) and (w2, p2), we will have w1/(1-p1) <= w2/(1-p2). Since the relation of having w1/(1-p1) <= w2/(1-p2) is the natural total ordering on the w/(1-p) values, the fact that w1/(1-p1) <= w2/(1-p2) holds for any pair of adjacent items means that the list is sorted by the w/(1-p) values.
Your attempted solution fails because it only considers what a pair of elements would do to the value of the tail of the array. It doesn't consider the fact that rather than using a low-p element now, to minimize the value of the tail, it might be better to save it for later, so you can apply that multiplier to more elements of m.
Note that the proof of our algorithm's validity relies on all p values being at least 0 and strictly less than 1. If p is 1, we can't divide by 1-p, and if p is greater than 1, dividing by 1-p reverses the direction of the inequality. These problems can be resolved using a comparator or a more sophisticated sort key. If p is less than 0, then w can switch sign, which reverses the logic of what items should be switched. Then we do need to know about the current state of the algorithm to decide which elements are better, and I'm not sure what to do then.
I'm stumped on how to speed up my algorithm, which sums multiples in a given range. This is for a problem on codewars.com.
Here's the code; I'll explain what's going on at the bottom.
import itertools

def solution(number):
    return multiples(3, number) + multiples(5, number) - multiples(15, number)

def multiples(m, count):
    l = 0
    for i in itertools.count(m, m):
        if i < count:
            l += i
        else:
            break
    return l
print solution(50000000) # takes 41.8 seconds
# one of the testers takes 50000000000000000000000000000000000000000 as input

# def multiples(m, count):
#     l = 0
#     for i in xrange(m, count, m):
#         l += i
#     return l
So basically, the problem asks you to return the sum of all the multiples of 3 and 5 below a given number. Here are the testers:
test.assert_equals(solution(10), 23)
test.assert_equals(solution(20), 78)
test.assert_equals(solution(100), 2318)
test.assert_equals(solution(200), 9168)
test.assert_equals(solution(1000), 233168)
test.assert_equals(solution(10000), 23331668)
My program has no problem getting the right answer. The problem arises when the input is large: when I pass in a number like 50000000, it takes over 40 seconds to return the answer. One of the inputs I'm asked to handle is 50000000000000000000000000000000000000000, which is a huge number. That's also the reason why I'm using itertools.count(); I tried xrange in my first attempt, but xrange can't handle numbers larger than a C-type long. I know the slowest part is the multiples method, yet it is still faster than my first attempt using a list comprehension and checking whether i % 3 == 0 or i % 5 == 0. Any ideas?
This solution should be faster for large numbers.
def solution(number):
    number -= 1
    a, b, c = number // 3, number // 5, number // 15
    asum, bsum, csum = a*(a+1) // 2, b*(b+1) // 2, c*(c+1) // 2
    return 3*asum + 5*bsum - 15*csum
Explanation:
Take any sequence from 1 to n:
1, 2, 3, 4, ..., n
Its sum will always be given by the formula n(n+1)/2. This can be proven easily if you consider that the expression (1 + n) / 2 is just a shortcut for computing the average, or arithmetic mean, of this particular sequence of numbers. Because average(S) = sum(S) / length(S), if you take the average of any sequence of numbers and multiply it by the length of the sequence, you get the sum of the sequence.
If we're given a number n, and we want the sum of the multiples of some given k up to n, including n, we want to find the summation:
k + 2k + 3k + 4k + ... + xk
where xk is the highest multiple of k that is less than or equal to n. Now notice that this summation can be factored into:
k(1 + 2 + 3 + 4 + ... + x)
We are given k already, so now all we need to find is x. If x is defined to be the highest number you can multiply k by to get a natural number less than or equal to n, then we can get the number x by using Python's integer division:
n // k == x
Once we find x, we can find the sum of the multiples of any given k up to a given n using previous formulas:
k(x(x+1)/2)
Our three given k's are 3, 5, and 15.
We find our x's in this line:
a, b, c = number // 3, number // 5, number // 15
Compute the summations of their multiples up to n in this line:
asum, bsum, csum = a*(a+1) // 2, b*(b+1) // 2, c*(c+1) // 2
And finally, multiply their summations by k in this line:
return 3*asum + 5*bsum - 15*csum
And we have our answer!
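As a quick sanity check (mine, not from the original answer), the closed form runs in constant time and, thanks to Python's arbitrary-precision integers, handles the huge tester input instantly:

assert solution(10) == 23
assert solution(1000) == 233168
print(solution(50000000000000000000000000000000000000000))  # returns immediately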
Recently I became interested in the subset-sum problem, which is finding a zero-sum subset of a given set. I found some solutions on SO; in addition, I came across a particular solution using the dynamic programming approach, and I translated it into Python based on its qualitative description. I'm trying to optimize this for larger lists, since it eats up a lot of memory. Can someone recommend optimizations or other techniques for this particular problem? Here's my attempt in Python:
import random
from time import time
from itertools import product

time0 = time()

# create a zero matrix of size a (rows) by b (cols)
def create_zero_matrix(a, b):
    return [[0]*b for x in xrange(a)]

# generate a list of size num with random integers within an upper and lower bound
def random_ints(num, lower=-1000, upper=1000):
    return [random.randrange(lower, upper+1) for i in range(num)]

# split a list up into N and P where N is the sum of the negative values
# and P the sum of the positive values.
# 0 does not count because of additive identity
def split_sum(A):
    N_list = []
    P_list = []
    for x in A:
        if x < 0:
            N_list.append(x)
        elif x > 0:
            P_list.append(x)
    return [sum(N_list), sum(P_list)]

# since the column indexes are in the range from 0 to P - N
# we would like to retrieve them based on an index in the range N to P
# n := row, m := col
def get_element(table, n, m, N):
    if n < 0:
        return 0
    try:
        return table[n][m - N]
    except:
        return 0

# same indexing convention as above
def set_element(table, n, m, N, value):
    table[n][m - N] = value

# input array
#A = [1, -3, 2, 4]
A = random_ints(200)
[N, P] = split_sum(A)

# create a zero matrix of size m (rows) by n (cols)
#
# m := the number of elements in A
# n := P - N + 1 (by definition N <= s <= P)
#
# each element in the matrix is either 0 (false) or 1 (true)
m = len(A)
n = P - N + 1
table = create_zero_matrix(m, n)

# set the element at index (0, A[0]) to be true
# Definition: Q(1, s) := (x1 == s). Note that indexing starts at 0 instead of 1.
set_element(table, 0, A[0], N, 1)

# iterate through each table element
#for i in xrange(1, m): #row
#    for s in xrange(N, P + 1): #col
for i, s in product(xrange(1, m), xrange(N, P + 1)):
    if get_element(table, i - 1, s, N) or A[i] == s or get_element(table, i - 1, s - A[i], N):
        #set_element(table, i, s, N, 1)
        table[i][s - N] = 1

# find a zero-sum subset solution
s = 0
solution = []
for i in reversed(xrange(0, m)):
    if get_element(table, i - 1, s, N) == 0 and get_element(table, i, s, N) == 1:
        s = s - A[i]
        solution.append(A[i])

print "Solution: ", solution

time1 = time()
print "Time execution: ", time1 - time0
I'm not quite sure if your solution is exact or a PTA (poly-time approximation).
But, as someone pointed out, this problem is indeed NP-complete, meaning every known exact algorithm takes time exponential in the size of the input.
Meaning, if you can process 1 operation in 0.1 nanosecond (10,000,000,000 operations per second), then for a list of 59 elements it will take:

2^59 ops / 10,000,000,000 ops per second ≈ 2^26 seconds
2^26 seconds / (3600 x 24 x 365 seconds per year) ≈ 2 years
You can find heuristics, which give you only a chance of finding an exact solution in polynomial time.
On the other hand, if you restrict the problem (to another one) by bounding the values of the numbers in the set, then the problem's complexity reduces to polynomial time. But even then, the memory consumed will be a polynomial of very high order: much larger than the few gigabytes of RAM you have, and even much larger than the few terabytes on your hard drive. (That's for small values of the bound on the elements of the set.)
Maybe this is the case for your dynamic programming algorithm. It seemed to me that you were using a bound of 1000 when building your initialization matrix. You can try a smaller bound, that is, if your input consistently consists of small values.
Good luck!
Someone on Hacker News came up with the following solution to the problem, which I quite liked. It just happens to be in Python :)
def subset_summing_to_zero(activities):
    subsets = {0: []}
    for (activity, cost) in activities.iteritems():
        old_subsets = subsets
        subsets = {}
        for (prev_sum, subset) in old_subsets.iteritems():
            subsets[prev_sum] = subset
            new_sum = prev_sum + cost
            new_subset = subset + [activity]
            if 0 == new_sum:
                new_subset.sort()
                return new_subset
            else:
                subsets[new_sum] = new_subset
    return []
I spent a few minutes with it and it worked very well.
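For illustration (my example with made-up activity names, not from the original post): the function expects a dict mapping each activity to its cost and returns the first zero-sum subset it finds, sorted.

activities = {'a': 5, 'b': -3, 'c': -2, 'd': 7}
print subset_summing_to_zero(activities) # ['a', 'b', 'c'], since 5 - 3 - 2 == 0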
An interesting article on optimizing Python code is available here. The main result is that you should inline your frequent loops; in your case, this would mean putting the actual code of get_element inside the loop instead of calling that function twice per iteration, to avoid the function-call overhead.
Hope that helps! Cheers
First thing that caught my eye:
def split_sum(A):
    N_list = 0
    P_list = 0
    for x in A:
        if x < 0:
            N_list += x
        elif x > 0:
            P_list += x
    return [N_list, P_list]
Some advice:
Try to use a 1-D list together with bitarray to cut the memory footprint to a minimum (http://pypi.python.org/pypi/bitarray); you would only need to change the get/set functions. This should reduce your memory footprint by a factor of at least 64 (an integer stored in a list is a pointer to a boxed integer object, so each cell can cost on the order of 3*32 bits).
Avoid using try/except; instead, work out the proper index ranges at the beginning. You may find this gains you a lot of speed. A sketch of both suggestions is below.
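Here is a rough sketch of both suggestions combined (my code, reusing the m, n, and N already computed in the question's script):

from bitarray import bitarray  # third-party: pypi.python.org/pypi/bitarray

# one flat bit per (row, col) cell instead of a list of lists of ints
table = bitarray(m * n)
table.setall(False)

def get_element(table, row, col, N):
    # explicit range check instead of try/except
    if row < 0 or not 0 <= col - N < n:
        return 0
    return table[row * n + (col - N)]

def set_element(table, row, col, N, value):
    table[row * n + (col - N)] = bool(value)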
The following code works for Python 3.3+. I have used the itertools module, which has some great methods to use.
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))

nums = input("Enter the Elements").strip().split()
inputSum = int(input("Enter the Sum You want"))

for i, combo in enumerate(powerset(nums), 1):
    total = 0
    for num in combo:
        total += int(num)
    if total == inputSum:
        print(combo)
The input/output is as follows:
Enter the Elements 1 2 3 4
Enter the Sum You want 5
('1', '4')
('2', '3')
Just change the values in the set w, make the array x as long as w, and pass the sum for which you want subsets as the last argument to subsetsum; that's all you need to do (if you want to check it with your own values).
def subsetsum(cs, k, r, x, w, d):
    x[k] = 1
    if cs + w[k] == d:
        # found a subset summing to d; print its members
        for i in range(0, k+1):
            if x[i] == 1:
                print(w[i], end=" ")
        print()
    elif cs + w[k] + w[k+1] <= d:
        # include w[k] and recurse
        subsetsum(cs + w[k], k+1, r - w[k], x, w, d)
    if (cs + r - w[k] >= d) and (cs + w[k] <= d):
        # exclude w[k] and recurse
        subsetsum(cs, k+1, r - w[k], x, w, d)

# driver for the above code
w = [2, 3, 4, 5, 0]
x = [0, 0, 0, 0, 0]
subsetsum(0, 0, sum(w), x, w, 7)