One of the samples for the Google or-tools is a solver for the n-queens problem. At the bottom it says that the implementation can be improved by adding symmetry breaking constraints to the constraint solver.
Looking around the internet, I found the symmetry breaking constraints for the n-queens problem, but I cannot for the life of me figure out how to convert those constraints into Python code that implements them.
EDIT: this was a bad question, let's update...
What have I tried?
Here is the setup from the first link above:
from ortools.constraint_solver import pywrapcp
N = 8
solver = pywrapcp.Solver("n-queens")
# Creates the variables.
# The array index is the column, and the value is the row.
queens = [solver.IntVar(0, N - 1, "x%i" % i) for i in range(N)]
# Creates the constraints.
# All rows must be different.
solver.Add(solver.AllDifferent(queens))
# All columns must be different because the indices of queens are all different.
# No two queens can be on the same diagonal.
solver.Add(solver.AllDifferent([queens[i] + i for i in range(N)]))
solver.Add(solver.AllDifferent([queens[i] - i for i in range(N)]))
# TODO: add symmetry breaking constraints
db = solver.Phase(queens, solver.CHOOSE_FIRST_UNBOUND, solver.ASSIGN_MIN_VALUE)
solver.NewSearch(db)
num_solutions = 0
while solver.NextSolution():
    num_solutions += 1
solver.EndSearch()
print()
print("Solutions found:", num_solutions)
print("Time:", solver.WallTime(), "ms")
I know I can implement simple constraints successfully. If I wanted to ensure the solution always has a queen in the first column on the first row, I could implement that like this:
solver.Add(queens[0] == 0)
The queens[0] variable represents the queen's location in the first column, and this constraint is only satisfied when that queen is on the first row. This is not what I want to do, however, because it's possible that a solution does not include any corner cells.
The symmetry breaking constraints for the n-queens problem are found below. They are pulled directly from the link in the second paragraph.
I understand how these constraints work. The idea is that you can apply this function to each cell on the n-queens board in order to transform the state into an equivalent state. One of these states will be the canonical representation of that state. This is used as a method to prune future processing by eliminating duplicate evaluations.
If I were just implementing this in an after the fact way, I would do exactly what I describe above, convert the state using each possible symmetry breaking function, calculate some sort of state hash (e.g. a string of the selected row in each column) and select the one that's the lowest for each proposed solution. Skip future processing on ones we have seen before.
My problem is that I don't know how to convert these transformations into constraints for the google or-tools constraint programming solver.
Let's take a look at the simplest one, d1(r[i] = j) => r[j] = i, reflection about the main diagonal. What I know is that the transformation needs to be applied to all cells and then compared against the current state in order to prevent that state from being expanded. I don't understand enough about Python to know what kind of expression works here for the transformation, and I just can't figure out how to create the constraint that compares the transformation to the current state for this particular solver.
state = [queens[i].Value() for i in range(N)]
symX = [state[N - (i + 1)] for i in range(N)]
symY = [N - (state[i] + 1) for i in range(N)]
symD1 = [state.index(i) for i in range(N)]
symD2 = [N - (state.index(N-(i+1)) + 1) for i in range(N)]
symR90 = [N - (state.index(i) + 1) for i in range(N)]
symR180 = [N - (state[N-(i+1)] + 1) for i in range(N)]
symR270 = [state.index(N-(i+1)) for i in range(N)]
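For example, here is a quick standalone check (using a known 5-queens solution and a throwaway is_valid helper, neither of which is part of the code above) that each of these transforms maps a valid placement to another valid placement:

def is_valid(sol):
    # rows, and both diagonal directions, must all be different
    n = len(sol)
    return (len(set(sol)) == n
            and len(set(sol[i] + i for i in range(n))) == n
            and len(set(sol[i] - i for i in range(n))) == n)

N = 5
state = [0, 2, 4, 1, 3]                            # a known 5-queens solution
symX = [state[N - (i + 1)] for i in range(N)]      # reflection on the vertical axis
symD1 = [state.index(i) for i in range(N)]         # reflection about the main diagonal
print(is_valid(state), is_valid(symX), is_valid(symD1))  # True True True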
I tried to use a custom DecisionBuilder to prune the search tree using the symmetries as new constraints, but I couldn't make it work.
Instead I had to use a SearchMonitor that captures every solution and checks whether that solution is a symmetry of a previous one.
Here I add the code of the SearchMonitor, the capture of the solution by overriding the AcceptSolution function, and the gen_symmetries function that calculates and checks all possible symmetries.
class SearchMonitor(pywrapcp.SearchMonitor):
    def __init__(self, solver, q):
        pywrapcp.SearchMonitor.__init__(self, solver)
        self.q = q
        self.all_solutions = []
        self.unique_solutions = []
        self.n = len(self.q)

    def AcceptSolution(self):
        qval = [self.q[i].Value() for i in range(self.n)]
        self.all_solutions.append(qval)
        symmetries = [vv in self.unique_solutions for vv in gen_symmetries(self.n, qval)]
        if sum(symmetries) == 0:
            self.unique_solutions.append(qval)
        return False
def gen_symmetries(n, solution):
    symmetries = []

    # x(r[i]=j) → r[n−i+1]=j
    x = list(range(n))
    for index in range(n):
        x[n - 1 - index] = solution[index]
    symmetries.append(x)

    # y(r[i]=j) → r[i]=n−j+1
    y = list(range(n))
    for index in range(n):
        y[index] = (n - 1 - solution[index])
    symmetries.append(y)

    # d1(r[i]=j) → r[j]=i
    d1 = list(range(n))
    for index in range(n):
        d1[solution[index]] = index
    symmetries.append(d1)

    # d2(r[i]=j) → r[n−j+1]=n−i+1
    d2 = list(range(n))
    for index in range(n):
        d2[n - 1 - solution[index]] = (n - 1 - index)
    symmetries.append(d2)

    # r90(r[i]=j) → r[j]=n−i+1
    r90 = list(range(n))
    for index in range(n):
        r90[solution[index]] = (n - 1 - index)
    symmetries.append(r90)

    # r180(r[i]=j) → r[n−i+1]=n−j+1
    r180 = list(range(n))
    for index in range(n):
        r180[n - 1 - index] = (n - 1 - solution[index])
    symmetries.append(r180)

    # r270(r[i]=j) → r[n−j+1]=i
    r270 = list(range(n))
    for index in range(n):
        r270[n - 1 - solution[index]] = index
    symmetries.append(r270)

    return symmetries
Later you just have to add the monitor to your solver like this.
monitor = SearchMonitor(solver, queens)
solver.Solve(db, monitor)
solver.NewSearch(db)
And finally just printing all the unique solutions
print("Unique Solutions:", len(monitor.unique_solutions), monitor.unique_solutions)
You can see the full working example in the gist.
https://gist.github.com/carlgira/7a4e6cf0f7b7412762171015917bccb4
You must use the known symmetry relations between the sought solutions to identify constraints that will eliminate equivalent solutions.
For every solution with queens[0] >= N/2 there is another, vertically mirrored, solution with queens[0] <= N/2. Therefore, we can look only for solutions with the smaller value of queens[0] and add the following constraint:
solver.Add(queens[0] < (N+1)//2) # Handle both even and odd values of N
Then, a solution satisfying the condition queens[0] < queens[N-1] has an equivalent, horizontally mirrored, solution satisfying queens[0] > queens[N-1]. You can tell the solver to look only for those solutions where the queen in the leftmost column is below the queen in the rightmost column:
solver.Add(queens[0] < queens[N-1])
I couldn't easily formulate a constraint reflecting the rotational symmetry, but I believe that it is possible.
Could someone please help me out with the following?
I have one dataframe with two columns: products and webshops (n x 2) with n products. Now I would like to obtain a binary (n x n) matrix with all products listed as the indices and all products listed as the column names. Then each cell should contain a 1 or 0 denoting whether the product in the index and column name came from the same webshop.
The following code is returning what I would like to achieve.
dist = np.empty((len(df_title), len(df_title)), int)
for i in range(0, len(df_title)):
    for j in range(0, len(df_title)):
        boolean = df_title.values[i][1] == df_title.values[j][1]
        dist[i][j] = boolean
df = pd.DataFrame(dist)
However, this code takes quite a significant time already for n = 1624. Therefore I was wondering if someone would have an idea for a faster algorithm.
Thanks!
It seems like you're only interested in the element at position 1 of every row anyway, so creating a temp variable for easier lookup could help:
lookup = df_title.values[:, 1]
Also, since you want to interpret the resulting matrix as a bool matrix, you should probably specify dtype=bool (1 byte per field) instead of dtype=int (8 bytes per field), which also cuts memory consumption by a factor of 8.
dist = np.empty((len(df_title), len(df_title)), dtype=bool)
Your matrix will be symmetric along the diagonal anyway, so you only need to compute "half" of the matrix; also, if i == j, we know the corresponding field should be True.
lookup = df_title.values[:, 1]
dist = np.empty((len(df_title), len(df_title)), dtype=bool)

for i in range(len(df_title)):
    for j in range(len(df_title)):
        if i == j:
            # diagonal
            dist[i, j] = True
        else:
            # symmetric along diagonal
            dist[i, j] = dist[j, i] = lookup[i] == lookup[j]
Also using numpy-broadcasting you could actually transform all of that into a single line of code, that is orders of magnitude faster than the double-for-loop solution:
lookup = df_title.values[:, 1]
dist = lookup[None, :] == lookup[:, None]
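For completeness, here is a small self-contained sketch with made-up product/webshop data (the real df_title isn't shown in the question) that wraps the broadcasting result back into a labelled DataFrame:

import pandas as pd

# Hypothetical stand-in for the question's df_title:
# column 0 = product name, column 1 = webshop.
df_title = pd.DataFrame({"product": ["p1", "p2", "p3", "p4"],
                         "webshop": ["shopA", "shopB", "shopA", "shopC"]})

lookup = df_title.values[:, 1]
dist = lookup[None, :] == lookup[:, None]

# Label rows and columns with the products, as asked for in the question.
result = pd.DataFrame(dist.astype(int),
                      index=df_title["product"],
                      columns=df_title["product"])
print(result)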
Given a list of 20 floats, I want to find a largest subset in which any two elements differ by more than mindiff = 1. Right now I am using a brute-force method that searches from the largest subsets down to smaller ones using itertools.combinations. As shown below, the code finds a subset after about 4 s for a list of 20 numbers.
from itertools import combinations
import random
from time import time

mindiff = 1.
length = 20
random.seed(99)
lst = [random.uniform(1., 10.) for _ in range(length)]
t0 = time()
n = len(lst)
sample = []
found = False
while not found:
    # get all subsets with size n
    subsets = list(combinations(lst, n))
    # shuffle to ensure randomness
    random.shuffle(subsets)
    for subset in subsets:
        # sort the subset numbers
        ss = sorted(subset)
        # calculate the differences between every two adjacent numbers
        diffs = [j-i for i, j in zip(ss[:-1], ss[1:])]
        if min(diffs) > mindiff:
            sample = set(subset)
            found = True
            break
    # check subsets with size - 1
    n -= 1
print(sample)
print(time()-t0)
Output:
{2.3704888087015568, 4.365818049020534, 5.403474619948962, 6.518944556233767, 7.8388969285727015, 9.117993839791751}
4.182451486587524
However, in reality I have a list of 200 numbers, which is infeasible for a brute-force enumeration. I want a fast algorithm to sample just one random largest subset with a minimum difference larger than 1. Note that I want each sample to have randomness and maximum size. Any suggestions?
My previous answer assumed you simply wanted a single optimal solution, not a uniform random sample of all solutions. This answer assumes you want one that samples uniformly from all such optimal solutions.
1. Construct a directed acyclic graph G where there is one node for each point, and nodes a and b are connected when b - a > mindist. Also add two virtual nodes, s and t, where s -> x for all x and x -> t for all x.
2. Calculate for each node in G how many paths of length k exist to t. You can do this efficiently in O(n^2 k) time using dynamic programming with a table P[x][k], filling initially P[x][0] = 0 except P[t][0] = 1, and then P[x][k] = sum(P[y][k-1] for y in neighbors(x)). Keep doing this until you reach the maximum k - you now know the size of the optimal subset.
3. Uniformly sample a path of length k from s to t, using P to weight your choices. This is done by starting at s: look at each neighbor y of s and choose one randomly, weighted by P[y][k-1]. This gives us the first element of the optimal set. We then repeatedly perform this step: when we are at x, we look at the neighbors of x and pick one randomly using weights P[y][k-i], where i is the step we're at.
4. Use the nodes you sampled in step 3 as your random subset.
An implementation of the above in pure Python:
import random

def sample_mindist_subset(xs, mindist):
    # Construct directed graph G.
    n = len(xs)
    s = n; t = n + 1  # Two virtual nodes, source and sink.
    neighbors = {
        i: [t] + [j for j in range(n) if xs[j] - xs[i] > mindist]
        for i in range(n)}
    neighbors[s] = [t] + list(range(n))
    neighbors[t] = []

    # Compute number of paths P[x][k] from x to t of length k.
    P = [[0 for _ in range(n+2)] for _ in range(n+2)]
    P[t][0] = 1
    for k in range(1, n+2):
        for x in range(n+2):
            P[x][k] = sum(P[y][k-1] for y in neighbors[x])

    # Sample maximum length path uniformly at random.
    maxk = max(k for k in range(n+2) if P[s][k] > 0)
    path = [s]
    while path[-1] != t:
        candidates = neighbors[path[-1]]
        weights = [P[cn][maxk-len(path)] for cn in candidates]
        path.append(random.choices(candidates, weights)[0])

    return [xs[i] for i in path[1:-1]]
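A quick usage sketch on data shaped like the question's list (which subset comes back varies with the random state):

import random

random.seed(99)
lst = [random.uniform(1., 10.) for _ in range(20)]
print(sample_mindist_subset(lst, 1.))   # one maximum-size subset, sampled at random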
Note that if you want to sample from the same set of numbers many times, you don't have to recompute P every single time and can re-use it.
I probably don't fully understand the question, because right now the solution is quite trivial. EDIT: yes, I misunderstood after all, the OP does not just want an optimal solution, but wishes to randomly sample from the set of optimal solutions. This answer is not incorrect but it also is an answer to a different question than what OP is interested in.
Simply sort the numbers and greedily construct the subset:
def mindist_subset(xs, mindist):
    result = []
    for x in sorted(xs):
        if not result or x - result[-1] > mindist:
            result.append(x)
    return result
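For instance, on a small made-up list:

print(mindist_subset([1.2, 1.9, 3.1, 3.4, 5.0], 1.))   # [1.2, 3.1, 5.0]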
Sketch of proof of correctness.
Suppose we have a solution S of optimal size for input array A. If it does not contain min(A), note that we could remove min(S) from S and add min(A), since this would only increase the distance between the smallest element and the second smallest number in S. Conclusion: we can assume without loss of generality that min(A) is part of an optimal solution.
Now we can apply this argument recursively. We add min(A) to a solution and remove all elements too close to min(A), giving remaining elements A'. Then we're left with a subproblem where exactly the same argument applies, we can choose min(A') as our next element of the solution, etc.
I am trying to write code where I have a list of vectors and I have to find the angle between every vector and the rest of them (I am working with MediaPipe's hand landmarks).
My code so far is this one:
vectors = [thumb_cmc_vec, thumb_mcp_vec, thumb_ip_vec, thumb_tip_vec, index_mcp_vec, index_pip_vec,
           index_dip_vec, index_tip_vec, middle_mcp_vec, middle_pip_vec, middle_dip_vec, middle_tip_vec,
           ring_mcp_vec, ring_pip_vec, ring_dip_vec, ring_tip_vec, pinky_mcp_vec, pinky_pip_vec,
           pinky_dip_vec, pinky_tip_vec]

for vector in vectors:
    next_vector = vector + 1
    print(vector)
    for next_vector in vectors:
        print(next_vector)
        M = (np.linalg.norm(vector) * np.linalg.norm(next_vector))
        ES = np.dot(vector, next_vector)
        th = math.acos(ES / M)
        list.append(th)
        print(list)
where M = the product of the norms of the current pair of vectors, ES = the scalar product of the vectors, and th = the angle between the vectors.
My problem is that the variable next_vector always starts the for loop from the first vector of the list, even though I want it to start from the vector after the outer loop's current one, in order not to get duplicate results. Also, when both of the loops are on the 3rd vector (thumb_ip_vec) I am getting this error:
th = math.acos(ES / M)
ValueError: math domain error
Is there any way to solve this? Thank you!
I think you can iterate through the list indices (using range(len(vectors) - 1)) and access the elements through their indices instead of looping through each element
for i in range(len(vectors) - 1):
    # Iterate from 0 to len(vectors) - 1
    vector = vectors[i]
    for j in range(i + 1, len(vectors)):
        # Iterate from index i + 1 to len(vectors)
        next_vector = vectors[j]
        M = (np.linalg.norm(vector) * np.linalg.norm(next_vector))
        ES = np.dot(vector, next_vector)
        th = math.acos(ES / M)
        list.append(th)
print(list)
The efficient solution here is to iterate over combinations of vectors:
from itertools import combinations  # At top of file

for vector, next_vector in combinations(vectors, 2):
    M = (np.linalg.norm(vector) * np.linalg.norm(next_vector))
    ES = np.dot(vector, next_vector)
    th = math.acos(ES / M)
    list.append(th)
It's significantly faster than looping over indices and indexing, reduces the level of loop nesting, and makes it more clear what you're trying to do (working with every unique pairing of the input).
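As a side note on the math domain error mentioned in the question: it usually comes from floating-point rounding pushing ES / M slightly outside [-1, 1] when two vectors are nearly (anti)parallel. A small sketch of one way to guard against it, using NumPy's clip (not part of the original answer):

import math
import numpy as np

vector = np.array([1.0, 2.0, 3.0])
next_vector = np.array([2.0, 4.0, 6.0])         # parallel vector: the ratio can round past 1
M = np.linalg.norm(vector) * np.linalg.norm(next_vector)
ES = np.dot(vector, next_vector)
cos_angle = np.clip(ES / M, -1.0, 1.0)          # keep the value inside acos's domain
th = math.acos(cos_angle)
print(th)                                       # approximately 0 for parallel vectors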
I'm not sure I understand your question, but consider using ranges instead.
Ranges let you iterate by index rather than by value, which means you can manipulate that index to access neighboring values.
for i in range(len(iterables)-1):
    ii = i+1
    initial_value = iterables[i]
    next_value = iterables[ii]
    for ii in range(len(iterables)):
        # do_rest_of_code
Sort of like the mailman, you can reach someone's neighbor without knowing the neighbor's address.
The structure above generally works, but you will need to tweak it to meet your needs.
I have encountered the edit distance (Levenshtein distance) problem. I have looked at other similar Stack Overflow questions and am certain that my question is distinct from them, either in the language used or the approach.
I have used a 2D array that compares the two strings, and dynamic programming to store previous values. If the characters at indices i and j of the strings match, the cell costs 0, as we don't need to do anything; otherwise the cost is 1. It is as shown in the picture below, where the orange arrow represents a match.
(Code below is edited after suggestions from answers)
def edit_distance(source, target):
    n = len(target)+1
    m = len(source)+1
    # let D denote the 2-dimensional array, m is the column, n is the row
    D = [[0]*m for _ in range(n)]
    # the loop inside is the target string, we operate on this
    # while the loop outside is the source string
    for j in range(0, m):
        for i in range(0, n):
            if target[i-1] == source[j-1]:
                # match, insertion and deletion, find the path with the fewest moves
                D[i][j] = min(D[i-1][j-1], D[i-1][j]+1, D[i][j-1]+1)
            else:
                # mismatch, insertion and deletion, find the path with the fewest moves
                D[i][j] = min(D[i-1][j-1]+1, D[i-1][j]+1, D[i][j-1]+1)
    return D[n-1][m-1]
print(edit_distance("distance", "editing"))
However, the final output was 8 in my code, while the optimal editing distance between the strings "editing" and "distance" should be 5, and I am very confused. Could you please help with it from the approach of dynamic programming?
You have 2 mistakes.
First is initialization. You fill everything with 0s, but then when you want to fill in D[1][m] you look in the cell above (where it should be m) and you find a 0. Make sure the borders are correctly filled in.
Second, your iterations are off. range(1, n) over 'editing' will give you 'diting'. To fix it, increase n and m by 1 (n = len(target) + 1) and in your comparison use target[i-1] == source[j-1].
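Applied to the question's code, those two fixes might look like the following sketch (one possible way to write it, keeping the question's variable names):

def edit_distance(source, target):
    n = len(target) + 1
    m = len(source) + 1
    D = [[0] * m for _ in range(n)]
    # Borders: transforming a prefix into the empty string (or vice versa)
    # costs as many single-character operations as the prefix is long.
    for i in range(n):
        D[i][0] = i
    for j in range(m):
        D[0][j] = j
    for i in range(1, n):
        for j in range(1, m):
            if target[i-1] == source[j-1]:
                D[i][j] = min(D[i-1][j-1], D[i-1][j] + 1, D[i][j-1] + 1)
            else:
                D[i][j] = min(D[i-1][j-1] + 1, D[i-1][j] + 1, D[i][j-1] + 1)
    return D[n-1][m-1]

print(edit_distance("distance", "editing"))   # expected output: 5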
Ah, looks like I have found a solution, so I'll have to answer my own question now. (I'm still confused by some parts, and am only answering to briefly introduce the new implementation, so as to save the time of other kind helpers.)
So firstly, I had missed a condition in the original code: what if one of the two string inputs is empty? Then we'll have to insert everything from the other string, so the optimal edit distance is just the length of that other string.
if i == 0:
    D[i][j] = j
elif j == 0:
    D[i][j] = i
Also, regarding the original for-loop of the code, I learnt my mistakes from GeeksforGeeks. If my understanding is correct, they are saying that if the characters at the two indices (i and j) match, all we need to do is move diagonally upward on the table (to i-1, j-1) without adding any cost.
Otherwise, if the characters do not match, we move either in the direction of i-1 or j-1, or diagonally, depending on which is cheapest. I was right about this, apart from the fact that the count is added after the move, whereas I had added it during the move.
I am still a bit unsure of how it works; I'll compare the two algorithms below, and it would be appreciated if someone could explain it further in the comments.
My original for-loop (present in the question)
for j in range(0, m):
    for i in range(0, n):
        if target[i-1] == source[j-1]:
            D[i][j] = min(D[i-1][j-1], D[i-1][j]+1, D[i][j-1]+1)
        else:
            D[i][j] = min(D[i-1][j-1]+1, D[i-1][j]+1, D[i][j-1]+1)
And below is the new for-loop, whose output is correct after testing:
if target[i-1] == source[j-1]:
    D[i][j] = D[i-1][j-1]
else:
    D[i][j] = 1 + min(D[i][j-1], D[i-1][j], D[i-1][j-1])
It would be appreciated if someone could further explain how this works, as I still have only a superficial understanding of the new code.
Final code:
def edit_distance(target, source):
    m = len(target)+1
    n = len(source)+1
    D = [[0 for x in range(n)] for x in range(m)]
    for i in range(m):
        for j in range(n):
            if i == 0:
                D[i][j] = j
            elif j == 0:
                D[i][j] = i
            elif target[i-1] == source[j-1]:
                D[i][j] = D[i-1][j-1]
            else:
                D[i][j] = 1 + min(D[i][j-1], D[i-1][j], D[i-1][j-1])
    return D[m-1][n-1]

print(edit_distance("distance", "editing"))
# output = 5, which is correct
Recently I became interested in the subset-sum problem, which is finding a zero-sum subset in a superset. I found some solutions on SO; in addition, I came across a particular solution which uses a dynamic programming approach. I translated his solution into Python based on his qualitative descriptions. I'm trying to optimize this for larger lists, which eats up a lot of my memory. Can someone recommend optimizations or other techniques to solve this particular problem? Here's my attempt in Python:
import random
from time import time
from itertools import product

time0 = time()

# create a zero matrix of size a (rows), b (cols)
def create_zero_matrix(a, b):
    return [[0]*b for x in xrange(a)]

# generate a list of size num with random integers with an upper and lower bound
def random_ints(num, lower=-1000, upper=1000):
    return [random.randrange(lower, upper+1) for i in range(num)]

# split a list up into N and P where N is the sum of the negative values and P the sum of the positive values.
# 0 does not count because of additive identity
def split_sum(A):
    N_list = []
    P_list = []
    for x in A:
        if x < 0:
            N_list.append(x)
        elif x > 0:
            P_list.append(x)
    return [sum(N_list), sum(P_list)]

# since the column indexes are in the range from 0 to P - N
# we would like to retrieve them based on the index in the range N to P
# n := row, m := col
def get_element(table, n, m, N):
    if n < 0:
        return 0
    try:
        return table[n][m - N]
    except:
        return 0

# same definition as above
def set_element(table, n, m, N, value):
    table[n][m - N] = value

# input array
#A = [1, -3, 2, 4]
A = random_ints(200)

[N, P] = split_sum(A)

# create a zero matrix of size m (rows) by n (cols)
#
# m := the number of elements in A
# n := P - N + 1 (by definition N <= s <= P)
#
# each element in the matrix will be a value of either 0 (false) or 1 (true)
m = len(A)
n = P - N + 1
table = create_zero_matrix(m, n)

# set first element at index (0, A[0]) to be true
# Definition: Q(1,s) := (x1 == s). Note that index starts at 0 instead of 1.
set_element(table, 0, A[0], N, 1)

# iterate through each table element
#for i in xrange(1, m):          #row
#    for s in xrange(N, P + 1):  #col
for i, s in product(xrange(1, m), xrange(N, P + 1)):
    if get_element(table, i - 1, s, N) or A[i] == s or get_element(table, i - 1, s - A[i], N):
        #set_element(table, i, s, N, 1)
        table[i][s - N] = 1

# find zero-sum subset solution
s = 0
solution = []
for i in reversed(xrange(0, m)):
    if get_element(table, i - 1, s, N) == 0 and get_element(table, i, s, N) == 1:
        s = s - A[i]
        solution.append(A[i])

print "Solution: ", solution

time1 = time()
print "Time execution: ", time1 - time0
I'm not quite sure if your solution is exact or a PTA (poly-time approximation).
But, as someone pointed out, this problem is indeed NP-Complete.
Meaning, every known (exact) algorithm has an exponential time behavior on the size of the input.
Meaning, if you can process 1 operation in .01 nanosecond then, for a list of 59 elements it'll take:
2^59 ops --> 2^59 / 10,000,000,000 seconds ≈ 2^26 seconds --> 2^26 / (3600 x 24 x 365) ≈ 1-2 years
You can find heuristics, which give you just a CHANCE of finding an exact solution in polynomial time.
On the other hand, if you restrict the problem (to another one) by bounding the values of the numbers in the set, then the problem's complexity reduces to polynomial time. But even then the memory space consumed will be a polynomial of VERY high order.
The memory consumed will be much larger than the few gigabytes you have in RAM.
And even much larger than the few terabytes on your hard drive.
(That's for small values of the bound on the elements in the set.)
Maybe this is the case for your dynamic programming algorithm.
It seemed to me that you were using a bound of 1000 when building your initialization matrix.
You can try a smaller bound. That is, if your input consistently consists of small values.
Good Luck!
Someone on Hacker News came up with the following solution to the problem, which I quite liked. It just happens to be in python :):
def subset_summing_to_zero(activities):
    subsets = {0: []}
    for (activity, cost) in activities.iteritems():
        old_subsets = subsets
        subsets = {}
        for (prev_sum, subset) in old_subsets.iteritems():
            subsets[prev_sum] = subset
            new_sum = prev_sum + cost
            new_subset = subset + [activity]
            if 0 == new_sum:
                new_subset.sort()
                return new_subset
            else:
                subsets[new_sum] = new_subset
    return []
I spent a few minutes with it and it worked very well.
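A quick illustration with made-up data (the function expects a dict mapping each activity to its cost, and it is Python 2 code because of iteritems):

activities = {'a': 1, 'b': -3, 'c': 2, 'd': 4}   # hypothetical input
print(subset_summing_to_zero(activities))        # ['a', 'b', 'c'], since 1 - 3 + 2 == 0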
An interesting article on optimizing python code is available here. Basically the main result is that you should inline your frequent loops, so in your case this would mean instead of calling get_element twice per loop, put the actual code of that function inside the loop in order to avoid the function call overhead.
Hope that helps! Cheers
First thing that catches the eye:
def split_sum(A):
    N_list = 0
    P_list = 0
    for x in A:
        if x < 0:
            N_list += x
        elif x > 0:
            P_list += x
    return [N_list, P_list]
Some advice:
Try to use a 1D list and use bitarray to reduce the memory footprint to a minimum (http://pypi.python.org/pypi/bitarray), so you just change the get/set functions; see the sketch after these points. This should reduce your memory footprint by a factor of at least 64 (an integer in a list is a pointer to a typed integer object, so it can easily be a factor of 3*32).
Avoid using try/except; instead figure out the proper ranges at the beginning. You may find that you gain a lot of speed.
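A minimal sketch of what the bitarray suggestion could look like (this assumes the bitarray package is installed; N, P, and m stand in for the values computed in the question's code and the numbers shown here are only placeholders):

from bitarray import bitarray

# N, P, m as computed in the question's code (placeholder values here).
N, P, m = -1000, 1000, 200
width = P - N + 1                  # number of columns, same as n in the question

table = bitarray(m * width)        # one flat bitarray: 1 bit per cell
table.setall(False)

def get_element(table, row, col, N):
    if row < 0 or col < N or col > P:
        return False               # out of range simply means "not reachable"
    return bool(table[row * width + (col - N)])

def set_element(table, row, col, N, value):
    table[row * width + (col - N)] = bool(value)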
The following code works for Python 3.3+. I have used the itertools module, which has some great methods to use.
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))

nums = input("Enter the Elements").strip().split()
inputSum = int(input("Enter the Sum You want"))

for i, combo in enumerate(powerset(nums), 1):
    sum = 0
    for num in combo:
        sum += int(num)
    if sum == inputSum:
        print(combo)
The input/output is as follows:
Enter the Elements 1 2 3 4
Enter the Sum You want 5
('1', '4')
('2', '3')
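For reference, the powerset helper defined above yields every combination, from the empty tuple up to the full set:

print(list(powerset([1, 2, 3])))
# [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]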
Just change the values in your set w and correspondingly make an array x as big as the length of w, then pass the last value in the subsetsum function as the sum for which you want subsets, and you will be done (if you want to check by giving your own values).
def subsetsum(cs, k, r, x, w, d):
    x[k] = 1
    if cs + w[k] == d:
        for i in range(0, k+1):
            if x[i] == 1:
                print(w[i], end=" ")
        print()
    elif cs + w[k] + w[k+1] <= d:
        subsetsum(cs + w[k], k+1, r - w[k], x, w, d)
    if (cs + r - w[k] >= d) and (cs + w[k] <= d):
        x[k] = 0
        subsetsum(cs, k+1, r - w[k], x, w, d)

# driver for the above code
w = [2, 3, 4, 5, 0]
x = [0, 0, 0, 0, 0]
subsetsum(0, 0, sum(w), x, w, 7)