I want to generate combinations from a list while excluding adjacent elements.
I have tried code that produces combinations without adjacent elements, and it works when the elements in the list are unique.
But it does not work when the list contains repeated elements, e.g. [4,5,4,3].
Code:
import itertools

b = []
stuff = [4,5,4,3]
for L in range(2, len(stuff)+1):
    for subset in itertools.combinations(stuff, L):
        a = list(subset)
        for i in range(1, len(a)):
            if stuff.index(a[i-1]) == stuff.index(a[i]) - 1:
                a.clear()
                break
        else:
            b.append(a)
print('b = ', b)
Expected result = [[4,4],[4,3],[5,3]]
Actual result = [[4, 4], [4, 3], [5, 4], [5, 3], [4, 3], [4, 4, 3], [4, 4, 3], [5, 4, 3], [5, 4, 3]]
I can explain with an example: suppose the list is [1,2,3,4,5]; then the possible non-adjacent combinations are [[1,3],[1,4],[1,5],[2,4],[2,5],[3,5],[1,3,5]], and these are the combinations I want. The code I am trying works well with unique elements, but when a number repeats in the given list, such as in [1,3,2,3,2,5], index() always returns the position of the first 3 and never the other one. So how can I get the combinations from such a list?
Instead of generating all the itertools.combinations and then filtering out the valid ones with index, which (a) is very inefficient and (b) does not work with duplicate elements, you should implement your own combinations algorithm. That is not too hard at all, and it might look somewhat like this:
def comb(lst, num):
    if num == 0:
        yield []
    if 0 < num <= len(lst):
        first, *rest = lst
        # combinations that include the first element
        for c in comb(rest, num-1):
            yield [first] + c
        # combinations that skip the first element
        for c in comb(rest, num):
            yield c
To add the "no adjacent elements" constraint, simply keep track of whether you took the last element, and only add the next element if this is not the case:
def comb_no_adj(lst, num, last=False):
    if num == 0:
        yield []
    if 0 < num <= len(lst):
        first, *rest = lst
        if not last:  # only take this element if the previous one was skipped
            for c in comb_no_adj(rest, num-1, True):
                yield [first] + c
        for c in comb_no_adj(rest, num, False):
            yield c
Example combinations for comb_no_adj([1,2,3,4,5,6], 3) are [1, 3, 5], [1, 3, 6], [1, 4, 6], and [2, 4, 6]. (This example does not contain duplicates simply for the sake of being easier to understand; since this algorithm does not use index, duplicate elements are not an issue.)
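As a quick check against the list from the question (a sketch that drives the generator the same way the original loop did), the repeated 4 poses no problem, because no index() call is involved:

stuff = [4, 5, 4, 3]
b = [c for L in range(2, len(stuff) + 1) for c in comb_no_adj(stuff, L)]
print('b = ', b)  # b = [[4, 4], [4, 3], [5, 3]]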
Update: In fact, first generating all the combinations and then filtering out invalid ones cannot work. Consider this example: [1,1,1]. All combinations with two elements would be [1,1], [1,1], [1,1] (the first and second, first and third, and second and third 1). How would you decide which of those to keep and which to discard? And it gets worse for [1,1,1,1]. (You could generate all combinations of element-index pairs and then filter those, though, but due to the large number of combinations that will be filtered out anyway, this would still be less efficient.)
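For completeness, here is a sketch of that element-index-pair idea (comb_no_adj_by_index is a hypothetical name): generate combinations of positions rather than values, keep only those with no two consecutive positions, and map back to values. It handles duplicates correctly but discards most of what it generates:

from itertools import combinations

def comb_no_adj_by_index(lst, num):
    # combine indices, not values, so duplicates are unambiguous
    return [[lst[i] for i in idx]
            for idx in combinations(range(len(lst)), num)
            if all(b - a > 1 for a, b in zip(idx, idx[1:]))]

print(comb_no_adj_by_index([1, 1, 1], 2))  # [[1, 1]], from indices (0, 2)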
I've sketched an algorithm that goes through a list of sets, checks each one, and finds all sets that do not overlap.
Example
c = [[1,2,3],[4,3,2],[4,5,6],[7,8,9]]
for a in range(0, len(c)):
    for b in range(0, len(c)):
        if c[a] does not overlap c[b]:
            new_list.append(c[a] and c[b])
            # example: [1,2,3],[4,5,6]
        if c[b] does not overlap all of new_list:
            # example: can't use [4,3,2] because 4 is already in one of the sets
            new_list.append([7,9,8])
Intended output
[1,2,3],[4,5,6],[7,8,9]
Please don't worry about [4,3,2],[7,8,9]. I intend to run this for loop inside a while loop over the other indices later.
Question
Is there any pre-existing algorithm in Python that will do what I intend?
Here's a version of your algorithm that uses a set for checking whether values in a list have been seen before:
c = [[1,2,3],[4,3,2],[4,5,6],[9,5,8],[7,8,9]]
new = []
s = set()
for l in c:
    if not any(v in s for v in l):  # keep l only if none of its values were seen before
        new.append(l)
        s.update(l)
print(new)
Output:
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
I have a python list where elements can repeat.
>>> a = [1,2,2,3,3,4,5,6]
I want to get the first n unique elements from the list.
So, in this case, if I want the first 5 unique elements, they would be:
[1,2,3,4,5]
I have come up with a solution using generators:
def iterate(itr, upper=5):
    count = 0
    for index, element in enumerate(itr):
        if index == 0:
            count += 1
            yield element
        elif element not in itr[:index] and count < upper:
            count += 1
            yield element
In use:
>>> i = iterate(a, 5)
>>> [e for e in i]
[1,2,3,4,5]
I have doubts about this being the optimal solution. Is there an alternative strategy that I can implement to write it in a more Pythonic and efficient way?
I would use a set to remember what was seen and return from the generator when you have seen enough:
a = [1, 2, 2, 3, 3, 4, 5, 6]

def get_unique_N(iterable, N):
    """Yields (in order) the first N unique elements of iterable.
    Might yield fewer if the data is too short."""
    seen = set()
    for e in iterable:
        if e in seen:
            continue
        seen.add(e)
        yield e
        if len(seen) == N:
            return

k = get_unique_N([1, 2, 2, 3, 3, 4, 5, 6], 4)
print(list(k))
Output:
[1, 2, 3, 4]
According to PEP 479 you should return from generators rather than raise StopIteration - thanks to #khelwood and #iBug for pointing that out - one never stops learning.
With 3.6 you get a DeprecationWarning; with 3.7 it raises a RuntimeError (see the PEP's Transition Plan) if you still raise StopIteration.
Your solution using elif element not in itr[:index] and count<upper: uses O(k) lookups - with k being the length of the slice - using a set reduces this to O(1) lookups but uses more memory, because the set has to be kept as well. It is a speed vs. memory tradeoff - which is better is application/data dependent.
Consider [1, 2, 3, 4, 4, 4, 4, 5] vs [1] * 1000 + [2] * 1000 + [3] * 1000 + [4] * 1000 + [5] * 1000 + [6]:
For 6 uniques (in the longer list):
you would have lookups of O(1)+O(2)+...+O(5001)
mine would have 5001*O(1) lookups + memory for the set {1, 2, 3, 4, 5, 6}
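A rough way to see that tradeoff on the longer list (a sketch only: iterate is the question's generator, get_unique_N is the one above, and the absolute numbers depend on your machine):

import timeit

long_list = [1] * 1000 + [2] * 1000 + [3] * 1000 + [4] * 1000 + [5] * 1000 + [6]

# slice-based lookups from the question vs. set-based lookups from this answer
print(timeit.timeit(lambda: list(iterate(long_list, 6)), number=10))
print(timeit.timeit(lambda: list(get_unique_N(long_list, 6)), number=10))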
You can adapt the popular itertools unique_everseen recipe:
def unique_everseen_limit(iterable, limit=5):
    seen = set()
    seen_add = seen.add  # bind the method once to avoid attribute lookups in the loop
    for element in iterable:
        if element not in seen:
            seen_add(element)
            yield element
        if len(seen) == limit:
            break

a = [1,2,2,3,3,4,5,6]
res = list(unique_everseen_limit(a))  # [1, 2, 3, 4, 5]
Alternatively, as suggested by #Chris_Rands, you can use itertools.islice to extract a fixed number of values from a non-limited generator:
from itertools import islice

def unique_everseen(iterable):
    seen = set()
    seen_add = seen.add
    for element in iterable:
        if element not in seen:
            seen_add(element)
            yield element

res = list(islice(unique_everseen(a), 5))  # [1, 2, 3, 4, 5]
Note the unique_everseen recipe is available in 3rd party libraries via more_itertools.unique_everseen or toolz.unique, so you could use:
from itertools import islice
from more_itertools import unique_everseen
from toolz import unique
res = list(islice(unique_everseen(a), 5)) # [1, 2, 3, 4, 5]
res = list(islice(unique(a), 5)) # [1, 2, 3, 4, 5]
If your objects are hashable (ints are hashable), you can write a utility function using the fromkeys method of the collections.OrderedDict class (or, starting from Python 3.7, a plain dict, since dicts became officially ordered), like:
from collections import OrderedDict

def nub(iterable):
    """Returns unique elements preserving order."""
    return OrderedDict.fromkeys(iterable).keys()
and then the implementation of iterate can be simplified to:
from itertools import islice

def iterate(itr, upper=5):
    return islice(nub(itr), upper)
or, if you always want a list as output:
def iterate(itr, upper=5):
    return list(nub(itr))[:upper]
Improvements
As #Chris_Rands mentioned, this solution walks through the entire collection; we can improve on that by writing the nub utility as a generator, as others already did:
def nub(iterable):
    seen = set()
    add_seen = seen.add
    for element in iterable:
        if element in seen:
            continue
        yield element
        add_seen(element)
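A quick usage sketch; with the generator nub, the islice variant of iterate also stops consuming the input as soon as it has upper unique values:

a = [1, 2, 2, 3, 3, 4, 5, 6]
print(list(iterate(a, 5)))  # [1, 2, 3, 4, 5]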
Here is a Pythonic approach using itertools.takewhile():
In [95]: from itertools import takewhile
In [96]: seen = set()
In [97]: set(takewhile(lambda x: seen.add(x) or len(seen) <= 4, a))
Out[97]: {1, 2, 3, 4}
This works because set.add() returns None (which is falsy), so the lambda adds x to seen and then evaluates len(seen) <= 4; the element that pushes the count past 4 ends the takewhile. Note, however, that the result is a set, so the original order is not preserved.
You can use OrderedDict or, since Python 3.7, an ordinary dict, since they are implemented to preserve the insertion order. Note that this won't work with sets.
N = 3
a = [1, 2, 2, 3, 3, 3, 4]
d = {x: True for x in a}
list(d.keys())[:N]
There are really amazing answers to this question, which are fast, compact, and brilliant! The reason I am putting this code here is that I believe there are plenty of cases when you don't care about losing a microsecond, nor do you want additional libraries in your code, for solving a simple task once.
a = [1,2,2,3,3,4,5,6]
res = []
for x in a:
    if x not in res:  # yes, not optimal, but doesn't need an additional dict
        res.append(x)
        if len(res) == 5:
            break
print(res)
Assuming the elements are ordered as shown, this is an opportunity to have fun with the groupby function in itertools:
from itertools import groupby, islice

def first_unique(data, upper):
    return islice((key for (key, _) in groupby(data)), 0, upper)

a = [1, 2, 2, 3, 3, 4, 5, 6]
print(list(first_unique(a, 5)))
Updated to use islice instead of enumerate per #juanpa.arrivillaga. You don't even need a set to keep track of duplicates.
Using set with sorted + key:
sorted(set(a), key=list(a).index)[:5]
Out[136]: [1, 2, 3, 4, 5]
Given
import itertools as it
a = [1, 2, 2, 3, 3, 4, 5, 6]
Code
A simple list comprehension (similar to #cdlane's answer).
[k for k, _ in it.groupby(a)][:5]
# [1, 2, 3, 4, 5]
Alternatively, in Python 3.6+:
list(dict.fromkeys(a))[:5]
# [1, 2, 3, 4, 5]
Profiling Analysis
Solutions
Which solution is the fastest? There are two clear favorite answers (and 3 solutions) that captured most of the votes.
The solution by Patrick Artner - denoted as PA.
The first solution by jpp - denoted as jpp1
The second solution by jpp - denoted as jpp2
This is because these claim to run in O(N) while others here run in O(N^2), or do not guarantee the order of the returned list.
Experiment setup
For this experiment 3 variables were considered.
N elements. The number of first N elements the function is searching for.
List length. The longer the list the further the algorithm has to look to find the last element.
Repeat limit. How many times an element can repeat before the next element occurs in the list. This is uniformly distributed between 1 and the repeat limit.
The assumptions for data generation were as follows. How strict these are depends on the algorithm used; they are more a note on how the data was generated than a limitation on the algorithms themselves.
The elements never occur again after their repeated sequence first appears in the list.
The elements are numeric and increasing.
The elements are of type int.
So in a list of [1,1,1,2,2,3,4 ....] 1,2,3 would never appear again. The next element after 4 would be 5, but there could be a random number of 4s up to the repeat limit before we see 5.
A new dataset was created for each combination of variables and re-generated 20 times. The Python timeit function was used to profile the algorithms 50 times on each dataset. The mean time of the 20x50=1000 runs (for each combination) is reported here. Since the algorithms are generators, their outputs were converted to a list to measure the execution time.
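A minimal sketch of how such a dataset could be generated under the assumptions above (this is not the author's exact script; make_data is a hypothetical name):

import random

def make_data(n_unique, repeat_limit):
    # increasing ints, each repeated a uniformly random number of times
    data = []
    for value in range(n_unique):
        data.extend([value] * random.randint(1, repeat_limit))
    return data

sample = make_data(200, 100)  # roughly 10k elements on average, as in Fig 2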
Results
As expected, the more elements searched for, the longer it takes. This graph shows that the execution time is indeed O(N), as claimed by the authors (the straight line confirms this).
Fig 1. Varying the first N elements searched for.
None of the three solutions consume computation time beyond what is required. The image below shows what happens when the list is limited in size, rather than by N elements. Lists of length 10k, with elements repeating a maximum of 100 times (and thus on average repeating 50 times), would on average run out of unique elements by 200 (10000/50). If any of these graphs showed an increase in computation time beyond 200, that would be a cause for concern.
Fig 2. The effect of first N elements chosen > number of unique elements.
The figure below again shows that processing time increases (at a rate of O(N)) the more data the algorithm has to sift through. The rate of increase is the same as when the first N elements were varied. This is because stepping through the list is the common execution block in both, and it is the execution block that ultimately decides how fast the algorithm is.
Fig 3. Varying the repeat limit.
Conclusion
The 2nd solution posted by jpp is the fastest of the 3 in all cases. It is only slightly faster than the solution posted by Patrick Artner, and almost twice as fast as jpp's first solution.
Why not use something like this?
>>> a = [1, 2, 2, 3, 3, 4, 5, 6]
>>> list(set(a))[:5]
[1, 2, 3, 4, 5]
Example list:
a = [1, 2, 2, 3, 3, 4, 5, 6]
The function returns all unique items, or the requested count of unique items, from a list.
The 1st argument is the list to work with; the 2nd argument (optional) is the count of unique items (by default None, meaning all unique elements will be returned).
def unique_elements(lst, number_of_elements=None):
    return list(dict.fromkeys(lst))[:number_of_elements]
Here is an example of how it works. The list name is "a", and we need to get 2 unique elements:
print(unique_elements(a, 2))
Output:
[1, 2]
a = [1,2,2,3,3,4,5,6]
from collections import defaultdict

def function(lis, n):
    dic = defaultdict(int)
    sol = set()
    for i in lis:
        if not dic[i]:  # first time we see i (a defaultdict never raises KeyError)
            sol.add(i)
            dic[i] = 1
        if len(sol) >= n:
            break
    return list(sol)

print(function(a, 3))
Output:
[1, 2, 3]
Given an array, I've found all the combinations of subsets that equal a targeted sum; I did that because I want the largest possible subset.
For instance, the array [1, 2, 2, 2] for the target sum of "4" returns [[2, 2], [2, 2], [2, 2]].
subsets = []

def subset_sum(numbers, target, partial=[]):
    s = sum(partial)
    if s == target:
        subsets.append(partial)
    if s >= target:
        return
    for i in range(len(numbers)):
        n = numbers[i]
        remaining = numbers[i + 1:]
        subset_sum(remaining, target, partial + [n])

subsets.sort()
subsets.reverse()  # note: the list method is reverse(), not reversed()
How can I remove values that appear more than once in the subsets list? In the example above, how can I keep only one [2,2]? And after that, how can I show the values of the initial array that are not in this final list? In the example above, [1].
You can use itertools.groupby to remove duplicate lists:
>>> import itertools
>>> lst = [[2, 2], [2, 2], [2, 2]]
>>> lst.sort()
>>> new_lst = list(k for k,_ in itertools.groupby(lst))
>>> print(new_lst)
[[2, 2]]
Then simply flatten new_lst with itertools.chain.from_iterable and check if any of the elements from the initial list do not exist in this flattened list:
>>> initial = [1,2,2,2]
>>> print([x for x in initial if x not in itertools.chain.from_iterable(new_lst)])
[1]
Note: You can probably make your subset_sum() return a list of non-duplicate items as well, but the above should also work fine.
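If you instead want multiset semantics for the leftover values (each used value removed once by count, rather than by membership), a hedged sketch with collections.Counter would look like this; note that it reports [1, 2] for this input, since only two of the three 2s were used:

from collections import Counter
import itertools

initial = [1, 2, 2, 2]
new_lst = [[2, 2]]
# subtract the used values by count, keeping only positive remainders
remaining = Counter(initial) - Counter(itertools.chain.from_iterable(new_lst))
print(list(remaining.elements()))  # [1, 2]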
This is not a direct answer to your question, but a better algorithm. If you're only looking for one example of a list of maximal length which satisfies your sum criterion, you should be looking at longer lists first. This code uses itertools for the combinatorial bits and will stop when the longest list is found.
import itertools

numbers = [1, 2, 2, 2]
target = 5

for size in reversed(range(1, 1 + len(numbers))):
    for c in itertools.combinations(numbers, size):
        if sum(c) == target:
            break
    else:
        continue
    break
c now contains the longest subset as a tuple: (1, 2, 2).
You can do something like this:
The data is:
data=[1, 2, 2,2]
import itertools
your_target=4
One-line solution:
print(set([k for k in itertools.combinations(data,r=2) if sum(k)==your_target]))
output:
{(2, 2)}
or better if you use a function:
def targeted_sum(data, your_target):
    result = set(k for k in itertools.combinations(data, r=2) if sum(k) == your_target)
    return result
print(targeted_sum(data,4))
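Note that r=2 restricts the search to pairs; a hedged generalization (targeted_sum_any_size is a hypothetical name) searches every combination size:

import itertools

def targeted_sum_any_size(data, your_target):
    # try every combination length, not just pairs
    return {c for r in range(1, len(data) + 1)
              for c in itertools.combinations(data, r)
              if sum(c) == your_target}

print(targeted_sum_any_size([1, 2, 2, 2], 4))  # {(2, 2)}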
I want to write a function that compares the first element of a list with the last element, the second element with the second-to-last element, and so on. If the compared elements are equal, I want to add the element to a new list. Finally, I'd like to print this new list.
For example,
>>> f([1,5,7,7,8,1])
[1,7]
>>> f([3,1,4,1,5])
[1,4]
>>> f([2,3,5,7,1,3,5])
[3,7]
I was thinking of taking the first (i) and last (k) elements, comparing them, then raising i and lowering k, and repeating the process. When i and k 'overlap', stop and print the list. I've tried to visualise my thoughts in the following code:
def f(x):
    newlist = []
    k = len(x) - 1
    i = 0
    for j in x:
        if x[i] == x[k]:
            if i < k:
                newlist.append(x[i])
        i = i + 1
        k = k - 1
    print(newlist)
Please let me know if there are any errors in my code, or if there is a more suitable way to address the problem.
As I am new to Python, I am not very good at understanding complicated terminology/features of Python. As such, it would be appreciated if you took this into account in your answer.
You could use a conditional list comprehension with enumerate, comparing the element x at index i to the element at index -1-i (-1 being the last index of the list):
>>> lst = [1,5,7,7,8,1]
>>> [x for i, x in enumerate(lst[:(len(lst)+1)//2]) if lst[-1-i] == x]
[1, 7]
>>> lst = [3,1,4,1,5]
>>> [x for i, x in enumerate(lst[:(len(lst)+1)//2]) if lst[-1-i] == x]
[1, 4]
Or, as already suggested in other answers, use zip. However, it is enough to slice the first argument; the second one can just be the reversed list, as zip will stop once one of the argument lists is finished, making the code a bit shorter.
>>> [x for x, y in zip(lst[:(len(lst)+1)//2], reversed(lst)) if x == y]
In both approaches, (len(lst)+1)//2 is equivalent to int(math.ceil(len(lst)/2)).
Maybe you want something like this for an even-length list:
>>> r = [l[i] for i in range(len(l)//2) if l[i]==l[-(i+1)]]
>>> r
[3]
>>> l = [1,5,7,7,8,1]
>>> r = [l[i] for i in range(len(l)//2) if l[i]==l[-(i+1)]]
>>> r
[1, 7]
And for an odd-length list:
>>> l = [3,1,4,1,5]
>>> r = [l[i] for i in range(len(l)//2+1) if l[i]==l[-(i+1)]]
>>> r
[1, 4]
so you can create a function:
def myfunc(mylist):
    if len(mylist) % 2 == 0:
        return [mylist[i] for i in range(len(mylist)//2) if mylist[i]==mylist[-(i+1)]]
    else:
        return [mylist[i] for i in range(len(mylist)//2+1) if mylist[i]==mylist[-(i+1)]]
and use it this way:
>>> l=[1,5,7,7,8,1]
>>> myfunc(l)
[1, 7]
>>> l=[3,1,4,1,5]
>>> myfunc(l)
[1, 4]
What you can do is zip the first half with the reversed second half and use a list comprehension to build a list of the matching ones:
[element_1 for element_1, element_2 in zip(l[:len(l)//2], reversed(l[(len(l)+1)//2:])) if element_1 == element_2]
What happens is that you take the first half and iterate over those as element_1, the second half reversed as element_2 and then only add them if they are the same:
l = [1, 2, 3, 3, 2, 4]
l[:len(l)//2] == [1, 2, 3]
list(reversed(l[(len(l)+1)//2:])) == [4, 2, 3]
1 != 4, 2 == 2, 3 == 3, result == [2, 3]
If you also want to include the middle element in the case of an odd list, we can just extend our lists to both include the middle element, which will always evaluate as the same:
[element_1 for element_1, element_2 in zip(l[:(len(l) + 1)//2], reversed(l[len(l)//2:])) if element_1 == element_2]
l = [3, 1, 4, 1, 5]
l[:(len(l) + 1)//2] == [3, 1, 4]
list(reversed(l[len(l)//2:])) == [5, 1, 4]
3 != 5, 1 == 1, 4 == 4, result == [1, 4]
Here is my solution:
[el1 for (el1, el2) in zip(L[:len(L)//2+1], L[len(L)//2:][::-1]) if el1==el2]
There is a lot going on, so let me explain step by step:
L[:len(L)//2+1] is the first half of the list plus an extra element (which is useful for lists of odd length)
L[len(L)//2:][::-1] is the second half of the list, reversed ([::-1])
zip creates a list of pairs from two lists. It stops at the end of the shorter list; we rely on this when the length of the list is even, so the extra term in the first half is neglected
A list comprehension is essentially equivalent to a for loop, but is useful for creating a list "on the fly". It includes an element only if the if condition is true; otherwise it skips it.
You can easily modify the solution above if you are interested in the indexes (of the first half) where the match occurs:
[idx for idx, (el1, el2) in enumerate(zip(L[:len(L)//2+1], L[len(L)//2:][::-1])) if el1==el2]
You can use the following, which leverages zip_longest:
from itertools import zip_longest

def compare(lst):
    size = len(lst) // 2
    return [y for x, y in zip_longest(lst[:size], lst[-1:size-1:-1], fillvalue=None)
            if x == y or x is None]

print(compare([1, 5, 7, 7, 8, 1]))     # [1, 7]
print(compare([3, 1, 4, 1, 5]))        # [1, 4]
print(compare([2, 3, 5, 7, 1, 3, 5]))  # [3, 7]
On zip_longest:
Normally, zip stops zipping when one of its iterators runs out. zip_longest does not have that limitation; it simply keeps on zipping, substituting dummy values.
Example:
list(zip([1, 2, 3], ['a'])) # [(1, 'a')]
list(zip_longest([1, 2, 3], ['a'], fillvalue='z')) # [(1, 'a'), (2, 'z'), (3, 'z')]
I asked the same thing yesterday but had a hard time finding the right words to describe my problem, so I deleted it. But here it is again.
Let us say that we have 3 lists:
list1 = [1, 2]
list2 = [2, 3]
list3 = [1]
Let us say I want to find the 3 permutations of these lists whose elements, when added together, result in the smallest sums possible. So here, the permutations that we want would be:
1,2,1
2,2,1
1,3,1
Because the sums of the numbers in these permutations are the smallest possible.
2,3,1
will not be part of the solution, since its sum is larger than those of the other three, and is thus not among the three smallest.
Of course, listing all the permutations with itertools and adding up the numbers in each would be the most obvious solution, but I was wondering if there is a more efficient algorithm for this, considering it should be able to handle 1000 lists.
NOTE: If the number of lists is N, then I need to find N permutations. Thus, with 3 lists, I find the 3 smallest permutations.
PRECONDITIONS:
- A part of the precondition is that all of these lists are sorted.
- The number of elements in each list is 2N-1, to deal with the case where only one list has more than 1 element.
- All of the lists are sorted from the smallest element.
Since the lists are sorted, the smallest element in each list is the first one, the sum of which gives us the "minimal sum permutation". Picking any element other than the first one is going to increase the sum value.
We start off by calculating the difference between element i and the first one for each list. For example, for the lists [1, 3, 4, 8] and [3, 9, 12, 15], these differences would be [2, 3, 7] and [6, 9, 12] respectively. We keep them separate in cost_lists, because they will be needed later on. But in cost_global, we pool them all together and by sorting them in ascending order, we find a solution where for all lists but one we choose the minimal value. To keep track which element from which list will give us the next minimum sum, we group the difference values with both the index of the list it comes from and which element in that list it is.
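A small sketch of that bookkeeping with the two example lists above; each entry pairs the cost difference with the index of the list it comes from and the position of the element within it:

lists = [[1, 3, 4, 8], [3, 9, 12, 15]]
# (difference from the first element, list index, element index)
cost_lists = [[(num - ilist[0], i, j) for j, num in enumerate(ilist[1:], 1)]
              for i, ilist in enumerate(lists)]
print([[c[0] for c in cl] for cl in cost_lists])  # [[2, 3, 7], [6, 9, 12]]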
However, this is not a complete approach. It is possible, for example, that taking the next value from two lists incurs a smaller cost than taking the next value from one list. So, we have to search for the product of the combinations for k = 2, 3, ..., N. Doing that normally would result in N**N complexity, but we can take some really good shortcuts.
From the partial solution above, we have a list of the minimal costs in order. Since we want only the first N minimal sums, we check what the cost value of the Nth permutation is (threshold). So, when we search for a group of two next values, we can safely ignore their sum if it exceeds our current threshold. And since the difference values within lists are in ascending order, once we cross the threshold, we can instantly exit the loop. Similarly, if we haven't found any new combinations within the threshold for k = 2, it is pointless to look for k > 2. Considering that most likely the smallest sum costs will be the result of a single nonminimal value, or a few small ones (unless most lists have massive differences between sequential values), we are bound to exit these loops rather quickly. The code I came up with to achieve this is fairly ugly, but it effectively does the same as
for k in xrange(2, len(lists)):
    for comb in itertools.combinations(cost_lists, k):
        for group in itertools.product(*comb):
            if sum(g[0] for g in group) <= threshold:
                cost_global.append(group)
except that we exit the loops as soon as we can guarantee not to find any results, lest we pointlessly sift through an innumerable number of combinations/products that are over the threshold.
def filter_cost(cost_lists, threshold):
    cost = [[i for i in ilist if i[0] <= threshold] for ilist in cost_lists]
    # the algorithm requires that we remove any lists that have become empty
    return [ilist for ilist in cost if ilist]

def _combi(cost_lists, k, start, depth, subtotal, threshold):
    if depth == k:
        for i in xrange(start, len(cost_lists)):
            for value in cost_lists[i]:
                if value[0] + subtotal > threshold:
                    break
                yield (value,)
    else:
        for i in xrange(start, len(cost_lists)):
            for value in cost_lists[i]:
                if value[0] + subtotal > threshold:
                    break
                for c in _combi(cost_lists, k, i+1, depth+1,
                                value[0]+subtotal, threshold):
                    yield (value,) + c

def combinations_product(cost_lists, k, threshold):
    for i in xrange(len(cost_lists)-k+1):
        for value in cost_lists[i]:
            if value[0] > threshold:
                break
            for comb in _combi(cost_lists, k, i+1, 2, value[0], threshold):
                temp = (value,) + comb
                cost, ilists, ith_items = zip(*temp)
                yield sum(cost), ilists, ith_items

def find_smallest_sum_permutations(lists):
    N = len(lists)  # used below; the original snippet relied on N from the enclosing scope
    minima = [min(x) for x in lists]
    cost_local = []
    cost_global = []
    for i, ilist in enumerate(lists):
        if len(ilist) > 1:
            first = ilist[0]
            diff = [(num-first, i, j) for j, num in enumerate(ilist[1:], 1)]
            cost_local.append(diff)
            cost_global.extend(diff)
    cost_global.sort()
    threshold_index = len(lists) - 2
    cost_threshold = cost_global[threshold_index][0]
    cost_local = filter_cost(cost_local, cost_threshold)
    for k in xrange(2, len(lists)):
        group_combinations = tuple(combinations_product(cost_local, k,
                                                        cost_threshold))
        if group_combinations:
            cost_global.extend(group_combinations)
            cost_global.sort()
            cost_threshold = cost_global[threshold_index][0]
            cost_local = filter_cost(cost_local, cost_threshold)
        else:
            break
    permutations = [minima]
    for k in xrange(N-1):
        _, ilist, ith_item = cost_global[k]
        if type(ilist) == int:
            # single nonminimal value
            permutation = [minima[i]
                           if i != ilist else lists[ilist][ith_item]
                           for i in xrange(N)]
        else:
            # multiple nonminimal values combination
            mapping = dict(zip(ilist, ith_item))
            permutation = [minima[i]
                           if i not in mapping else lists[i][mapping[i]]
                           for i in xrange(N)]
        permutations.append(permutation)
    return permutations
Examples
Example in the question.
>>> lists = [
...     [1, 2],
...     [2, 3],
...     [1],
... ]
>>> for p in find_smallest_sum_permutations(lists):
...     print p, sum(p)
[1, 2, 1] 4
[2, 2, 1] 5
[1, 3, 1] 5
Example I had generated with random lists.
>>> import random
>>> N = 5
>>> random.seed(1024)
>>> lists = [sorted(random.sample(range(10*N), 2*N-1)) for _ in xrange(N)]
>>> for p in find_smallest_sum_permutations(lists):
...     print p, sum(p)
[4, 4, 1, 6, 0] 15
[4, 6, 1, 6, 0] 17
[4, 4, 3, 6, 0] 17
[4, 4, 1, 6, 4] 19
[4, 6, 3, 6, 0] 19
Example by user2357112, who caught a glaring error in my previous iteration.
>>> lists = [
...     [1, 2, 30, 40],
...     [1, 2, 30, 40],
...     [10, 20, 30, 40],
...     [10, 20, 30, 40],
... ]
>>> for p in find_smallest_sum_permutations(lists):
...     print p, sum(p)
[1, 1, 10, 10] 22
[2, 1, 10, 10] 23
[1, 2, 10, 10] 23
[2, 2, 10, 10] 24
The trick is to only generate the combinations that might possibly be needed, and store them in a heap. Each one that you pull out is the smallest one you have not yet seen. And the fact that THAT combination has been pulled out tells you that there are new ones which might also be small.
See https://docs.python.org/2/library/heapq.html for how to use a heap. We also need code for generating combinations. And with that, here is working code for getting the first n combinations for any list of lists:
import heapq
from itertools import count

# Helper class for storing combinations.
class ListSelector:
    def __init__(self, lists, indexes):
        self.lists = lists
        self.indexes = indexes

    def value(self):
        answer = 0
        for i in range(0, len(self.lists)):
            answer = answer + self.lists[i][self.indexes[i]]
        return answer

    def values(self):
        return [self.lists[i][self.indexes[i]] for i in range(0, len(self.lists))]

    # These are the next combinations. We are willing to increment any
    # leading 0, or the first non-zero value. This will provide one and
    # only one path to each possible combination.
    def next_selectors(self):
        lists = self.lists
        indexes = self.indexes
        selectors = []
        for i in range(0, len(lists)):
            if len(lists[i]) <= indexes[i] + 1:
                if 0 == indexes[i]:
                    continue
                else:
                    break
            new_indexes = [
                indexes[j] + (0 if j != i else 1)
                for j in range(0, len(lists))]
            selectors.append(ListSelector(lists, new_indexes))
            if 0 < indexes[i]:
                break
        return selectors

# This will just return an iterator over all combinations, from smallest
# to largest. It does NOT generate them until needed.
def combinations(lists):
    # the counter breaks ties on equal values, so the heap never has to
    # compare two ListSelector objects (which would raise a TypeError)
    tiebreak = count()
    sel = ListSelector(lists, [0 for _ in range(len(lists))])
    upcoming = [(sel.value(), next(tiebreak), sel)]
    while len(upcoming):
        value, _, sel = heapq.heappop(upcoming)
        yield sel
        for next_sel in sel.next_selectors():
            heapq.heappush(upcoming, (next_sel.value(), next(tiebreak), next_sel))

# This just gets the first n of them. (It will return fewer if there are fewer.)
def smallest_n_combinations(n, lists):
    i = 0
    for sel in combinations(lists):
        yield sel
        i = i + 1
        if i == n:
            break

# Example usage
lists = [
    [1, 2, 5],
    [2, 3, 4],
    [1]]
for sel in smallest_n_combinations(3, lists):
    print(sel.value(), sel.values(), sel.indexes)
(This could be made more efficient for a long list of lists with tricks like caching the value inside of ListSelector and calculating it incrementally for new ones.)
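As a sketch of that caching idea (child_value is a hypothetical helper, not part of the code above): a child selector differs from its parent in exactly one index, so its sum can be derived in O(1) instead of being recomputed across all lists.

def child_value(parent_value, lists, indexes, i):
    # the child advanced index i of the parent by one step
    return parent_value - lists[i][indexes[i]] + lists[i][indexes[i] + 1]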