How to align two lists of numbers - python

I have two sorted lists of numbers A and B with B being at least as long as A. Say:
A = [1.1, 2.3, 5.6, 5.7, 10.1]
B = [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]
I want to associate each number in A with a different number in B but preserving order. For any such mapping we define the total distance to be the sum of the squared distances between mapped numbers.
For example:
If we map 1.1 to 0, then 2.3 can be mapped to any number in B from 1.9 onwards. But if we had mapped 1.1 to 2.7, then 2.3 could only be mapped to a number in B from 8.4 onwards.
Say we map 1.1->0, 2.3->1.9, 5.6->8.4, 5.7->9.1, 10.1->10.7. This is a valid mapping and has distance (1.1^2+0.4^2+2.8^2+3.4^2+0.6^2).
Another example to show a greedy approach will not work:
A = [1, 2]
B = [0, 1, 10000]
If we map 1->1 then we have to map 2->10000 which is bad.
The task is to find the valid mapping with minimal total distance.
Is this hard to do? I am interested in a method that is fast when the lists are a few thousand elements long.
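For reference, the cost function is easy to check by brute force on tiny inputs. The sketch below (not part of the question, just a baseline) enumerates every order-preserving choice of len(A) indices from B with itertools.combinations and keeps the cheapest; it is exponential, so it is only useful for verifying the fast solutions on small examples.
from itertools import combinations

def brute_force(A, B):
    # Try every increasing choice of len(A) indices into B (this is exactly
    # an order-preserving mapping) and keep the smallest total squared distance.
    best = None
    for idxs in combinations(range(len(B)), len(A)):
        cost = sum((a - B[j]) ** 2 for a, j in zip(A, idxs))
        if best is None or cost < best[0]:
            best = (cost, [(a, B[j]) for a, j in zip(A, idxs)])
    return best

print(brute_force([1.1, 2.3, 5.6, 5.7, 10.1],
                  [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]))
# (16.709999999999997, [(1.1, 1.9), (2.3, 2.4), (5.6, 2.7), (5.7, 8.4), (10.1, 10.7)])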

And here is an O(n) solution! (This is the original attempt; see below for a fixed version.)
The idea is as follows. We first solve the problem for every other element, turn that into a very close solution, then use dynamic programming to find the real solution. This is solving a problem that is half the size first, followed by O(n) work. Using the fact that x + x/2 + x/4 + ... = 2x this turns out to be O(n) work.
This very, very much requires sorted lists. And doing a band that is 5 across is overkill; it very much looks like a band that is 3 across always gives the right answer, but I wasn't confident enough to go with that.
def improve_matching (list1, list2, matching):
    # We do DP forward, trying a band that is 5 across, building up our
    # answer as a linked list.  If our answer changed by no more than 1
    # anywhere, we are done.  Else we recursively improve again.
    best_j_last = -1
    last = {-1: (0.0, None)}
    for i in range(len(list1)):
        best_j = None
        best_cost = None
        this = {}
        for delta in (-2, 2, -1, 1, 0):
            j = matching[i] + delta
            # Bounds sanity checks.
            if j < 0:
                continue
            elif len(list2) <= j:
                continue

            j_prev = best_j_last
            if j <= j_prev:
                if j-1 in last:
                    j_prev = j-1
                else:
                    # Can't push back this far.
                    continue

            cost = last[j_prev][0] + (list1[i] - list2[j])**2
            this[j] = (cost, [j, last[j_prev][1]])
            if (best_j is None) or cost <= best_cost:
                best_j = j
                best_cost = cost

        best_j_last = best_j
        last = this

    (final_cost, linked_list) = last[best_j_last]
    matching_rev = []
    while linked_list is not None:
        matching_rev.append(linked_list[0])
        linked_list = linked_list[1]
    matching_new = [x for x in reversed(matching_rev)]
    for i in range(len(matching_new)):
        if 1 < abs(matching[i] - matching_new[i]):
            print("Improving further")  # Does this ever happen?
            return improve_matching(list1, list2, matching_new)
    return matching_new
def match_lists (list1, list2):
    if 0 == len(list1):
        return []
    elif 1 == len(list1):
        best_j = 0
        best_cost = (list1[0] - list2[0])**2
        for j in range(1, len(list2)):
            cost = (list1[0] - list2[j])**2
            if cost < best_cost:
                best_cost = cost
                best_j = j
        return [best_j]
    elif 1 < len(list1):
        # Solve a smaller problem first.
        list1_smaller = [list1[2*i] for i in range((len(list1)+1)//2)]
        list2_smaller = [list2[2*i] for i in range((len(list2)+1)//2)]
        matching_smaller = match_lists(list1_smaller, list2_smaller)

        # Start with that matching.
        matching = [None] * len(list1)
        for i in range(len(matching_smaller)):
            matching[2*i] = 2*matching_smaller[i]

        # Fill in the holes between
        for i in range(len(matching) - 1):
            if matching[i] is None:
                best_j = matching[i-1] + 1
                best_cost = (list1[i] - list2[best_j])**2
                for j in range(best_j+1, matching[i+1]):
                    cost = (list1[i] - list2[j])**2
                    if cost < best_cost:
                        best_cost = cost
                        best_j = j
                matching[i] = best_j

        # And fill in the last one if needed
        if matching[-1] is None:
            if matching[-2] + 1 == len(list2):
                # This will be an invalid matching, but improve will fix that.
                matching[-1] = matching[-2]
            else:
                best_j = matching[-2] + 1
                best_cost = (list1[-1] - list2[best_j])**2
                for j in range(best_j+1, len(list2)):
                    cost = (list1[-1] - list2[j])**2
                    if cost < best_cost:
                        best_cost = cost
                        best_j = j
                matching[-1] = best_j

        # And now improve.
        return improve_matching(list1, list2, matching)
def best_matching (list1, list2):
    matching = match_lists(list1, list2)
    cost = 0.0
    result = []
    for i in range(len(matching)):
        pair = (list1[i], list2[matching[i]])
        result.append(pair)
        cost = cost + (pair[0] - pair[1])**2
    return (cost, result)
UPDATE
There is a bug in the above. It can be demonstrated with match_lists([1, 3], [0, 0, 0, 0, 0, 1, 3]). However, the solution below is also O(n) and I believe has no bugs. The difference is that instead of looking at a band of fixed width, I look at a band whose width is dynamically determined by the previous matching. Since no more than 5 entries can try to match at any given spot, it again winds up as O(n) work for this pass plus a geometrically decreasing recursive call. But long stretches of the same value can no longer cause a problem.
def match_lists (list1, list2):
    prev_matching = []
    if 0 == len(list1):
        # Trivial match
        return prev_matching
    elif 1 < len(list1):
        # Solve a smaller problem first.
        list1_smaller = [list1[2*i] for i in range((len(list1)+1)//2)]
        list2_smaller = [list2[2*i] for i in range((len(list2)+1)//2)]
        prev_matching = match_lists(list1_smaller, list2_smaller)

    best_j_last = -1
    last = {-1: (0.0, None)}
    for i in range(len(list1)):
        lowest_j = 0
        highest_j = len(list2) - 1
        if 3 < i:
            lowest_j = 2 * prev_matching[i//2 - 2]
        if i + 4 < len(list1):
            highest_j = 2 * prev_matching[i//2 + 2]

        if best_j_last == highest_j:
            # Have to push it back.
            best_j_last = best_j_last - 1

        best_cost = last[best_j_last][0] + (list1[i] - list2[highest_j])**2
        best_j = highest_j
        this = {best_j: (best_cost, [best_j, last[best_j_last][1]])}

        # Now try the others.
        for j in range(lowest_j, highest_j):
            prev_j = best_j_last
            if j <= prev_j:
                prev_j = j - 1
            if prev_j not in last:
                continue
            else:
                cost = last[prev_j][0] + (list1[i] - list2[j])**2
                this[j] = (cost, [j, last[prev_j][1]])
                if cost < best_cost:
                    best_cost = cost
                    best_j = j

        last = this
        best_j_last = best_j

    (final_cost, linked_list) = last[best_j_last]
    matching_rev = []
    while linked_list is not None:
        matching_rev.append(linked_list[0])
        linked_list = linked_list[1]
    matching_new = [x for x in reversed(matching_rev)]
    return matching_new

def best_matching (list1, list2):
    matching = match_lists(list1, list2)
    cost = 0.0
    result = []
    for i in range(len(matching)):
        pair = (list1[i], list2[matching[i]])
        result.append(pair)
        cost = cost + (pair[0] - pair[1])**2
    return (cost, result)
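As a quick sanity check (a usage sketch, not part of the original code), the example from the question and the case that broke the first version can both be run through this fixed version:
print(best_matching([1.1, 2.3, 5.6, 5.7, 10.1],
                    [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]))
# Should report a total cost of about 16.71, matching the other answers.
print(best_matching([1, 3], [0, 0, 0, 0, 0, 1, 3]))
# The case that exposed the bug above; the optimal cost here is 0.0.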
Note
I was asked to explain why this works.
Here is my heuristic understanding. In the algorithm we solve the half-problem. Then we have to solve the full problem.
The question is how far can an optimal solution for the full problem be forced to be from the optimal solution to the half problem? We push it to the right by making every element in list2 that wasn't in the half problem as large as possible, and every element in list1 that wasn't in the half problem as small as possible. But if we shove the ones from the half problem to the right and put the duplicated elements where they were, then, modulo boundary effects, we've got 2 optimal solutions to the half problem, and nothing has moved farther than to where the next element to its right sat in the half problem. Similar reasoning applies to trying to force the solution left.
Now let's discuss those boundary effects. Those boundary effects are off by one element at the ends. So when we try to shove an element off the end, we can't always do it. By looking 2 elements over instead of 1, we add enough wiggle room to account for that as well.
Hence there has to be an optimal solution that is fairly close to the half problem doubled in an obvious way. There may be others, but there is at least one. And the DP step will find it.
I would need to do some work to capture this intuition into a formal proof, but I'm confident that it could be done.

Here's a recursive solution. Pick the middle element of a; map that to each possible element of b (leave enough on each end to accommodate the left and right sections of a). For each such mapping, compute the single-element cost; then recur on each of the left and right fragments of a and b.
Here's the code; I'll leave memoization as an exercise for the student.
test_case = [
    [ [1, 2], [0, 1, 10] ],
    [ [1.1, 2.3, 5.6, 5.7, 10.1], [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8] ],
]

import math

indent = ""

def best_match(a, b):
    """
    Find the best match for elements in a mapping to b, preserving order
    """
    global indent
    indent += " "
    # print(indent, "ENTER", a, b)
    best_cost = math.inf
    best_map = []
    if len(a) == 0:
        best_cost = 0
        best_map = []
    else:
        # Match the middle element of `a` to each eligible element of `b`
        a_midpt = len(a) // 2
        a_elem = a[a_midpt]
        l_margin = a_midpt
        r_margin = a_midpt + len(b) - len(a)
        for b_pos in range(l_margin, r_margin+1):
            # For each match ...
            b_elem = b[b_pos]
            # print(indent, "TRACE", a_elem, b_elem)
            # ... compute the element cost ...
            mid_cost = (a_elem - b_elem)**2
            # ... and recur for similar alignments on left & right list fragments
            l_cost, l_map = best_match(a[:l_margin], b[:b_pos])
            r_cost, r_map = best_match(a[l_margin+1:], b[b_pos+1:])
            # Check total cost against best found; keep the best
            cand_cost = l_cost + mid_cost + r_cost
            # print(indent, " COST", mid_cost, l_cost, r_cost)
            if cand_cost < best_cost:
                best_cost = cand_cost
                best_map = l_map[:] + [(a_elem, b_elem)]
                best_map.extend(r_map[:])
    # print(indent, "LEAVE", best_cost, best_map)
    return best_cost, best_map

for a, b in test_case:
    print('\n', a, b)
    print(best_match(a, b))
Output:
a = [1, 2]
b = [0, 1, 10]
2 [(1, 0), (2, 1)]
a = [1.1, 2.3, 5.6, 5.7, 10.1]
b = [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]
16.709999999999997 [(1.1, 1.9), (2.3, 2.4), (5.6, 2.7), (5.7, 8.4), (10.1, 10.7)]
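One possible memoized variant of the same recursion (a sketch, not the original answer's code): convert the inputs to tuples so sub-problems are hashable, and cache them with functools.lru_cache.
import functools
import math

def best_match_memo(a, b):
    # Same divide-and-conquer as best_match, but each pair of sub-tuples
    # is solved only once thanks to the cache.
    @functools.lru_cache(maxsize=None)
    def go(a, b):
        if not a:
            return 0.0, ()
        a_midpt = len(a) // 2
        a_elem = a[a_midpt]
        best_cost, best_map = math.inf, ()
        # The middle element of `a` can go anywhere that leaves room for
        # the left and right fragments of `a`.
        for b_pos in range(a_midpt, a_midpt + len(b) - len(a) + 1):
            mid_cost = (a_elem - b[b_pos]) ** 2
            l_cost, l_map = go(a[:a_midpt], b[:b_pos])
            r_cost, r_map = go(a[a_midpt + 1:], b[b_pos + 1:])
            cand_cost = l_cost + mid_cost + r_cost
            if cand_cost < best_cost:
                best_cost = cand_cost
                best_map = l_map + ((a_elem, b[b_pos]),) + r_map
        return best_cost, best_map
    return go(tuple(a), tuple(b))

print(best_match_memo([1.1, 2.3, 5.6, 5.7, 10.1],
                      [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]))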

For giggles and grins, here is what is hopefully a much faster solution than either of the other working ones. The idea is simple. First we do a greedy match left to right. Then a greedy match right to left. This gives us bounds on where each element can go. Then we can do a DP solution left to right only looking at possible values.
If the greedy approaches agree, this will take linear time. If the greedy approaches are very far apart, this can take quadratic time. But the hope is that the greedy approaches produce reasonably close results, resulting in close to linear performance.
def match_lists(list1, list2):
    # First we try a greedy matching from left to right.
    # This gives us, for each element, the last place it could
    # be forced to match.  (It could match later, for instance
    # in a run of equal values in list2.)
    match_last = []
    j = 0
    for i in range(len(list1)):
        while True:
            if len(list2) - j <= len(list1) - i:
                # We ran out of room.
                break
            elif abs(list2[j+1] - list1[i]) <= abs(list2[j] - list1[i]):
                # Take the better value
                j = j + 1
            else:
                break
        match_last.append(j)
        j = j + 1

    # Next we try a greedy matching from right to left.
    # This gives us, for each element, the first place it could be
    # forced to match.
    # We build it in reverse order, then reverse.
    match_first_rev = []
    j = len(list2) - 1
    for i in range(len(list1) - 1, -1, -1):
        while True:
            if j <= i:
                # We ran out of room
                break
            elif abs(list2[j-1] - list1[i]) <= abs(list2[j] - list1[i]):
                # Take the better value
                j = j - 1
            else:
                break
        match_first_rev.append(j)
        j = j - 1
    match_first = [x for x in reversed(match_first_rev)]

    # And now we do DP forward, building up our answer as a linked list.
    best_j_last = -1
    last = {-1: (0.0, None)}
    for i in range(len(list1)):
        # We initialize with the last position we could choose.
        best_j = match_last[i]
        best_cost = last[best_j_last][0] + (list1[i] - list2[best_j])**2
        this = {best_j: (best_cost, [best_j, last[best_j_last][1]])}
        # Now try the rest of the range of possibilities
        for j in range(match_first[i], match_last[i]):
            j_prev = best_j_last
            if j <= j_prev:
                j_prev = j - 1  # Push back to the last place we could match
            cost = last[j_prev][0] + (list1[i] - list2[j])**2
            this[j] = (cost, [j, last[j_prev][1]])
            if cost < best_cost:
                best_cost = cost
                best_j = j
        last = this
        best_j_last = best_j

    (final_cost, linked_list) = last[best_j_last]
    matching_rev = []
    while linked_list is not None:
        # The linked list runs from the last element of list1 back to the first.
        matching_rev.append(
            (list1[len(list1) - 1 - len(matching_rev)], list2[linked_list[0]]))
        linked_list = linked_list[1]
    matching = [x for x in reversed(matching_rev)]
    return (final_cost, matching)

print(match_lists([1.1, 2.3, 5.6, 5.7, 10.1], [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]))

Python is not very friendly with recursion, so applying a recursive solution to a list of thousands of elements might not fare so well. Here is a bottom-up approach that takes advantage of the fact that, as we move through A, the index of each element's optimal partner in B is non-decreasing. (Works for both sorted and non-sorted input.)
def f(A, B):
    m = [[(float('inf'), -1) for b in B] for a in A]
    for i in range(len(A)):
        for j in range(i, len(B) - len(A) + i + 1):
            d = (A[i] - B[j]) ** 2
            if i == 0:
                if j == i:
                    m[i][j] = (d, j)
                elif d < m[i][j-1][0]:
                    m[i][j] = (d, j)
                else:
                    m[i][j] = m[i][j-1]
            # i > 0
            else:
                candidate = d + m[i-1][j-1][0]
                if j == i:
                    m[i][j] = (candidate, j)
                else:
                    if candidate < m[i][j-1][0]:
                        m[i][j] = (candidate, j)
                    else:
                        m[i][j] = m[i][j-1]

    result = m[len(A)-1][len(B)-1][0]

    # Backtrack
    lst = [None for a in A]
    j = len(B) - 1
    for i in range(len(A)-1, -1, -1):
        j = m[i][j][1]
        lst[i] = j
        j = j - 1
    return (result, [(A[i], B[j]) for i, j in enumerate(lst)])

A = [1, 2]
B = [0, 1, 10000]
print(f(A, B))
print("")

A = [1.1, 2.3, 5.6, 5.7, 10.1]
B = [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]
print(f(A, B))
Output:
(2, [(1, 0), (2, 1)])
(16.709999999999997, [(1.1, 1.9), (2.3, 2.4), (5.6, 2.7), (5.7, 8.4), (10.1, 10.7)])
Update
Here's an O(|B|) space implementation. I'm not sure if this still offers a way to backtrack to get the mapping but I'm working on it.
def f(A, B):
    m = [(float('inf'), -1) for b in B]
    m1 = [(float('inf'), -1) for b in B]  # m[i-1]
    for i in range(len(A)):
        for j in range(i, len(B) - len(A) + i + 1):
            d = (A[i] - B[j]) ** 2
            if i == 0:
                if j == i:
                    m[j] = (d, j)
                elif d < m[j-1][0]:
                    m[j] = (d, j)
                else:
                    m[j] = m[j-1]
            # i > 0
            else:
                candidate = d + m1[j-1][0]
                if j == i:
                    m[j] = (candidate, j)
                else:
                    if candidate < m[j-1][0]:
                        m[j] = (candidate, j)
                    else:
                        m[j] = m[j-1]
        m1 = m
        m = m[:len(B) - len(A) + i + 1] + [(float('inf'), -1)] * (len(A) - i - 1)

    result = m1[len(B)-1][0]

    # Backtrack
    # This doesn't work as is
    # to get the mapping
    lst = [None for a in A]
    j = len(B) - 1
    for i in range(len(A)-1, -1, -1):
        j = m1[j][1]
        lst[i] = j
        j = j - 1
    return (result, [(A[i], B[j]) for i, j in enumerate(lst)])

A = [1, 2]
B = [0, 1, 10000]
print(f(A, B))
print("")

A = [1.1, 2.3, 5.6, 5.7, 10.1]
B = [0, 1.9, 2.4, 2.7, 8.4, 9.1, 10.7, 11.8]
print(f(A, B))

import random
import time

A = [random.uniform(0, 10000.5) for i in range(10000)]
B = [random.uniform(0, 10000.5) for i in range(15000)]

start = time.time()
print(f(A, B)[0])
end = time.time()
print(end - start)

Related

I am trying to batch some lists according to each list's weight in Python

The idea is to group the products in list o while making sure that, when grouping them, they do not exceed the cap given by the corresponding weights in list w. Currently running this code gives me an "index out of range" error, and since I actively change the main list while looping, is there a way I can keep things in sync? Is there another way? I would really appreciate all the help I can get.
from pandas.core.common import flatten

w = [15.0, 5.0, 5.0, 2, 4]
o = [[3, [1, 2]], 4, 5, 6, 7]
M = [[5, 5, 5], [5], [5], [2], [6]]
cap = 15
LEN = len(o)
for i in range(1, LEN):
    if len(o) > 2:
        for j in range(1, LEN):
            if i != j and w[i] + w[j] <= cap:
                O1 = o[i]
                O2 = o[j]
                W1 = w[i]
                W2 = w[j]
                M1 = M[i]
                M2 = M[j]
                print("wow", i, j)
                TUP = [o[i], o[j]]
                o.remove(O1)
                o.remove(O2)
                o.append(TUP)
                WTEMP = w[i] + w[j]
                w.remove(W1)
                w.remove(W2)
                w.append(WTEMP)
                MTEMP = [M1, M2]
                MTEMP = list(flatten(MTEMP))
                M.remove(M1)
                M.remove(M2)
                M.append(MTEMP)
print(w, o, M)
You're trying to index into w, but you're doing it based on the size of o, which is not guaranteed to stay the same size once you start removing and appending elements. This is causing your issue.
You can add extra guards to the if condition to ensure that you're not going to overstep the bounds of w.
for i in range(1, LEN):
    if len(o) > 2:
        for j in range(1, LEN):
            if i != j and i < len(w) and j < len(w) and w[i] + w[j] <= cap:
                ...
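Another way to avoid the problem entirely is to never mutate the lists you are iterating over: record which indices have been grouped and build new lists instead. A rough sketch of that idea (it uses a simple first-fit pairing, which may not be the exact grouping rule you want):
w = [15.0, 5.0, 5.0, 2, 4]
o = [[3, [1, 2]], 4, 5, 6, 7]
M = [[5, 5, 5], [5], [5], [2], [6]]
cap = 15

used = set()
new_o, new_w, new_M = [], [], []
for i in range(len(o)):
    if i in used:
        continue
    for j in range(i + 1, len(o)):
        if j not in used and w[i] + w[j] <= cap:
            # Pair item i with the first later, unused partner that fits under the cap.
            new_o.append([o[i], o[j]])
            new_w.append(w[i] + w[j])
            new_M.append(M[i] + M[j])  # list concatenation flattens the two member lists
            used.update((i, j))
            break
    else:
        # No partner fits; keep the item on its own.
        new_o.append(o[i])
        new_w.append(w[i])
        new_M.append(M[i])

print(new_w, new_o, new_M)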

Trying to find the optimal subset for the Greedy knapsack problem(python)

I think this is the correct algorithm for finding the optimal value, but now I need to find the optimal subset that got me that value. Help would be greatly appreciated!
These were my directions:
Implement a greedy algorithm that arranges the items in the decreasing order of value to weight ratio (vi/wi for i = 1, 2, ..., n), then select the items in this order until the weight of the next item exceeds the remaining capacity (Note: In this greedy version, we stop right after the first item whose inclusion would exceed the knapsack capacity).
def greedy_knapsack(val, weight, W, n):
    # index = [0, 1, 2, ..., n - 1] for n items
    index = list(range(len(val)))
    # contains ratios of values to weight
    ratio = [v / w for v, w in zip(val, weight)]
    QuickSort(ratio, 0, len(ratio) - 1)
    max_value = 0
    for i in index:
        if weight[i] <= W:
            max_value += val[i]
            W -= weight[i]
        else:
            max_value += val[i] * W // weight[i]
            break
    return max_value
Your greedy approach will fail in many cases.
One such trivial case:
weight = [6, 5, 5]
value = [5, 4, 3]
W = 10
Here the greedy from your directions picks only item 1 (it has the best value-to-weight ratio, and neither remaining item then fits), for a total of 5, but the optimal answer is items 2 and 3 together, for a total of 7.
You need a dynamic programming approach to solve this and you can keep a matrix to store your previous states so that you can reconstruct the solution and get the item list.
# Prints the items which are put in a
# knapsack of capacity W
def printknapSack(W, wt, val, n):
    K = [[0 for w in range(W + 1)]
         for i in range(n + 1)]

    # Build table K[][] in bottom
    # up manner
    for i in range(n + 1):
        for w in range(W + 1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif wt[i - 1] <= w:
                K[i][w] = max(val[i - 1]
                              + K[i - 1][w - wt[i - 1]],
                              K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]

    # stores the result of Knapsack
    res = K[n][W]
    print(res)

    w = W
    for i in range(n, 0, -1):
        if res <= 0:
            break
        # either the result comes from the
        # top (K[i-1][w]) or from (val[i-1]
        # + K[i-1][w-wt[i-1]]) as in the Knapsack
        # table. If it comes from the latter
        # one, it means the item is included.
        if res == K[i - 1][w]:
            continue
        else:
            # This item is included.
            print(wt[i - 1])

            # Since this weight is included
            # its value is deducted
            res = res - val[i - 1]
            w = w - wt[i - 1]

# Driver code
val = [60, 100, 120]
wt = [10, 20, 30]
W = 50
n = len(val)
printknapSack(W, wt, val, n)
ref: https://www.geeksforgeeks.org/printing-items-01-knapsack/
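If you also want the item list from the greedy heuristic itself (keeping in mind it is not guaranteed optimal, as shown above), a small sketch of the procedure your directions describe might look like this:
def greedy_knapsack_items(val, weight, W):
    # Consider items in decreasing value-to-weight ratio and stop at the
    # first item that does not fit, per the assignment's directions.
    order = sorted(range(len(val)), key=lambda i: val[i] / weight[i], reverse=True)
    total, chosen = 0, []
    for i in order:
        if weight[i] > W:
            break
        chosen.append(i)
        total += val[i]
        W -= weight[i]
    return total, chosen

print(greedy_knapsack_items([5, 4, 3], [6, 5, 5], 10))  # (5, [0]) -- not the optimum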

Python Quicksort implementation

I tried to implement recursive quicksort in Python, but it doesn't work. I know the problem is that the array doesn't get sorted, because the pivot is always higher than i, which results in i always being equal to m.
def partition(array):
    pivot = array[-1]
    m = 0
    for i in range(len(array) - 1):
        if array[i] < pivot:
            array[i], array[m] = array[m], array[i]
            m += 1
        else:
            continue
    array[m], array[len(array)-1] = array[len(array)-1], array[m]
    return m

def quicksort(array):
    if len(array) > 1:
        m = partition(array)
        quicksort(array[:m])
        quicksort(array[m+1:])
        return array

def main():
    testarray = [3, 6, 2, 4, 5, 1, 9, 8, 7, 10, 14]
    print(quicksort(testarray))

if __name__ == '__main__':
    main()
Two things. Firstly, you forgot to return array when it's of length 1, and secondly your recursive calls sort copies (the slices), so array itself is never modified before you return it. This will work:
def quicksort(array):
    if len(array) > 1:
        m = partition(array)
        # return the concatenation of the sorted left part, the pivot,
        # and the sorted right part
        return quicksort(array[:m]) + [array[m]] + quicksort(array[m+1:])
    else:
        return array
For those looking for an iterative/non-recursive version of Quicksort, here's an implementation I came up with in Python:
from random import randint

def default_comparator_fn(a, b):
    return -1 if a < b else (1 if a > b else 0)

def reverse_comparator_fn(a, b):
    return default_comparator_fn(b, a)

def quick_sort(A, comparator_fn=default_comparator_fn):
    n = len(A)
    if n < 2:
        # The list has only 1 element or does not have any.
        return A

    # There are at least 2 elements.
    partitions = [[0, n - 1]]  # [[start, end]]
    while len(partitions):
        partition = partitions.pop()
        start = partition[0]
        end = partition[1]
        pivot_index = randint(start, end)
        pivot = A[pivot_index]
        A[pivot_index], A[start] = A[start], A[pivot_index]
        breakpoint_index = start

        k = start + 1
        m = end
        while k <= m:
            res = comparator_fn(A[k], pivot)
            if res < 0:
                breakpoint_index = k
            else:
                while m > k:
                    res = comparator_fn(A[m], pivot)
                    if res < 0:
                        breakpoint_index = k
                        A[m], A[k] = A[k], A[m]
                        m -= 1
                        break
                    m -= 1
            k += 1

        A[start], A[breakpoint_index] = A[breakpoint_index], A[start]

        if start < breakpoint_index - 1:
            partitions.append([start, breakpoint_index - 1])
        if breakpoint_index + 1 < end:
            partitions.append([breakpoint_index + 1, end])

    return A

# Example:
A = [4, 2, 5, 1, 3]
quick_sort(A)  # Sort in ascending order ([1, 2, 3, 4, 5]).
quick_sort(A, reverse_comparator_fn)  # Sort in descending order ([5, 4, 3, 2, 1]).
This implementation of Quicksort accepts an optional custom comparator function which defaults to a comparator which compares the elements of the list in ascending order.
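Since the comparator is arbitrary, it can also express orderings that aren't plain less-than on the elements themselves; for instance (purely illustrative data), ordering (name, score) pairs by their score:
people = [("alice", 3), ("bob", 1), ("carol", 2)]
by_score = lambda a, b: default_comparator_fn(a[1], b[1])
print(quick_sort(people, by_score))  # [('bob', 1), ('carol', 2), ('alice', 3)]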

Dynamic Programming: Rod cutting and remembering where cuts are made

So I have this code in Python and currently it only returns the maximum value for cutting a rod. How can I modify it to also give me where the cuts were made? It takes a list of prices, where the price at index i is the value of a piece of length i + 1, and n, the length of the rod.
the problem:http://www.radford.edu/~nokie/classes/360/dp-rod-cutting.html
def cutRod(price, n):
    val = [0 for x in range(n+1)]
    val[0] = 0
    for i in range(1, n+1):
        max_val = 0
        for j in range(i):
            max_val = max(max_val, price[j] + val[i-j-1])
        val[i] = max_val
    return val[n]
If this is the question: Rod cutting.
Assuming the code works fine, you will have to replace the max operation with a condition that checks which of the two options was picked, and push that choice into an array:
def cutRod(price, n):
    val = [0 for x in range(n+1)]
    val[0] = 0
    output = list()
    for i in range(1, n+1):
        max_val = 0
        cur_max_index = -1
        for j in range(i):
            cur_val = price[j] + val[i-j-1]
            if cur_val > max_val:
                max_val = cur_val          # store current max
                cur_max_index = j          # and index
        if cur_max_index != -1:
            output.append(cur_max_index)   # append to output index list
        val[i] = max_val
    print(output)  # print array
    return val[n]
I know this is old, but just in case someone else has a look... I was actually just looking at this problem. I think the issue here is that these DP problems can be tricky when handling indices. The previous answer is not going to print the solution correctly, simply because this line needs to be adjusted:
cur_max_index = j should be cur_max_index = j + 1
The rest...
def cut_rod(prices, length):
    values = [0] * (length + 1)
    cuts = [-1] * (length + 1)
    max_val = -1
    for i in range(1, length + 1):
        for j in range(i):
            temp = prices[j] + values[i - j - 1]
            if temp > max_val:
                max_val = prices[j] + values[i - j - 1]
                cuts[i] = j + 1
        values[i] = max_val
    return values[length], cuts

def print_cuts(cuts, length):
    while length > 0:
        print(cuts[length], end=" ")
        length -= cuts[length]

# Example data (hypothetical prices; prices[j] is the price of a piece of length j + 1):
prices = [1, 5, 8, 9, 10, 17, 17, 20]
length = 8

max_value, cuts = cut_rod(prices, length)
print(max_value)
print_cuts(cuts, length)
Well, if you need to get the actual pieces that result from this process, then you'll probably need recursion.
For example, something like this:
def cutRod(price, n):
    val = [0 for x in range(n + 1)]
    pieces = [[0, 0]]
    val[0] = 0
    for i in range(1, n + 1):
        max_val = 0
        max_pieces = [0, 0]
        for j in range(i):
            curr_val = price[j] + val[i - j - 1]
            if curr_val > max_val:
                max_val = curr_val
                max_pieces = [j + 1, i - j - 1]
        pieces.append(max_pieces)
        val[i] = max_val

    arr = []

    def f(left, right):
        if right == 0:
            arr.append(left)
            return
        f(pieces[left][0], pieces[left][1])
        f(pieces[right][0], pieces[right][1])

    f(pieces[n][0], pieces[n][1])
    return val[n], arr
In this code there is an additional array, pieces, which represents the best way to divide a rod of each length.
There is also a function f that walks through pieces and reconstructs the optimal way to divide the whole rod.
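A quick usage example with the classic textbook price table (purely example data; price[j] is the price of a piece of length j + 1):
price = [1, 5, 8, 9, 10, 17, 17, 20]
print(cutRod(price, 8))
# Should print (22, [2, 6]): a rod of length 8 is best cut into pieces of lengths 2 and 6.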

how to diff / align Python lists using arbitrary matching function?

I'd like to align two lists in a similar way to what difflib.Differ would do except I want to be able to define a match function for comparing items, not just use string equality, and preferably a match function that can return a number between 0.0 and 1.0, not just a boolean.
So, for example, say I had the two lists:
L1 = [('A', 1), ('B', 3), ('C', 7)]
L2 = ['A', 'b', 'C']
and I want to be able to write a match function like this:
def match(item1, item2):
    if item1[0] == item2:
        return 1.0
    elif item1[0].lower() == item2.lower():
        return 0.5
    else:
        return 0.0
and then do:
d = Differ(match_func=match)
d.compare(L1, L2)
and have it diff using the match function. Like difflib, I'd rather the algorithm gave more intuitive Ratcliff-Obershelp type results rather than a purely minimal Levenshtein distance.
I just wrote this implementation of Needleman-Wunsch and it seems to do what I want:
def nw_align(a, b, replace_func, insert, delete):
    ZERO, LEFT, UP, DIAGONAL = 0, 1, 2, 3
    len_a = len(a)
    len_b = len(b)
    matrix = [[(0, ZERO) for x in range(len_b + 1)] for y in range(len_a + 1)]
    for i in range(len_a + 1):
        matrix[i][0] = (insert * i, UP)
    for j in range(len_b + 1):
        matrix[0][j] = (delete * j, LEFT)
    for i in range(1, len_a + 1):
        for j in range(1, len_b + 1):
            replace = replace_func(a[i - 1], b[j - 1])
            matrix[i][j] = max([
                (matrix[i - 1][j - 1][0] + replace, DIAGONAL),
                (matrix[i][j - 1][0] + insert, LEFT),
                (matrix[i - 1][j][0] + delete, UP)
            ])

    i, j = len_a, len_b
    align_a = ""
    align_b = ""
    while (i, j) != (0, 0):
        if matrix[i][j][1] == DIAGONAL:
            align_a += a[i - 1]
            align_b += b[j - 1]
            i -= 1
            j -= 1
        elif matrix[i][j][1] == LEFT:
            align_a += "-"
            align_b += b[j - 1]
            j -= 1
        else:  # UP
            align_a += a[i - 1]
            align_b += "-"
            i -= 1

    return align_a[::-1], align_b[::-1]
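For a quick test with arbitrary match/mismatch scores (illustrative values only): nw_align as written builds the aligned sequences with string concatenation, so it is shown here on two plain strings; aligning lists such as L1 and L2 from the question would require the traceback to build lists instead.
def simple_score(x, y):
    # +1 for a match, -1 for a mismatch (illustrative scores).
    return 1.0 if x == y else -1.0

align_a, align_b = nw_align("GATTACA", "GCATGCU", simple_score, insert=-1, delete=-1)
print(align_a)
print(align_b)
# Prints the two aligned strings, with '-' marking gaps.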
I recently ran across a discussion of an algorithm called patience diff that sounds rather simple. You could try implementing that yourself, and then of course you can have it use whatever comparison algorithm you like.
