Generate itertools.product in different order - python

I have some sorted/scored lists of parameters. I'd like to generate possible combinations of parameters (cartesian product). However, if the number of parameters is large, this quickly (very quickly!!) becomes a very large number. Basically, I'd like to do a cartesian product, but stop early.
import itertools

parameter_options = ['1234',
                     '123',
                     '1234']

for parameter_set in itertools.product(*parameter_options):
    print(''.join(parameter_set))
generates:
111
112
113
114
121
122
123
124
131
132
133
134
...
I'd like to generate (or something similar):
111
112
121
211
122
212
221
222
...
So that if I stop early, I'd at least get a couple of "good" sets of parameters, where a good set of parameters comes mostly early from the lists. This particular order would be fine, but I am interested in any technique that changes the "next permutation" choice order. I'd like the early results generated to have most items from the front of the list, but don't really care whether a solution generates 113 or 122 first, or whether 211 or 112 comes first.
My plan is to stop after some number of permutations are generated (maybe 10K or so? Depends on results). So if there are fewer than the cutoff, all should be generated, ultimately. And preferably each generated only once.

I think you can get your results in the order you want if you think of the output in terms of a graph traversal of the output space. You want a nearest-first traversal, while the itertools.product function is a depth-first traversal.
Try something like this:
import heapq

def nearest_first_product(*sequences):
    start = (0,) * len(sequences)
    queue = [(0, start)]
    seen = set([start])
    while queue:
        priority, indexes = heapq.heappop(queue)
        yield tuple(seq[index] for seq, index in zip(sequences, indexes))
        for i in range(len(sequences)):
            if indexes[i] < len(sequences[i]) - 1:
                lst = list(indexes)
                lst[i] += 1
                new_indexes = tuple(lst)
                if new_indexes not in seen:
                    new_priority = sum(index * index for index in new_indexes)
                    heapq.heappush(queue, (new_priority, new_indexes))
                    seen.add(new_indexes)
Example output:
for tup in nearest_first_product(range(1, 5), range(1, 4), range(1, 5)):
    print(tup)
(1, 1, 1)
(1, 1, 2)
(1, 2, 1)
(2, 1, 1)
(1, 2, 2)
(2, 1, 2)
(2, 2, 1)
(2, 2, 2)
(1, 1, 3)
(1, 3, 1)
(3, 1, 1)
(1, 2, 3)
(1, 3, 2)
(2, 1, 3)
(2, 3, 1)
(3, 1, 2)
(3, 2, 1)
(2, 2, 3)
(2, 3, 2)
(3, 2, 2)
(1, 3, 3)
(3, 1, 3)
(3, 3, 1)
(1, 1, 4)
(2, 3, 3)
(3, 2, 3)
(3, 3, 2)
(4, 1, 1)
(1, 2, 4)
(2, 1, 4)
(4, 1, 2)
(4, 2, 1)
(2, 2, 4)
(4, 2, 2)
(3, 3, 3)
(1, 3, 4)
(3, 1, 4)
(4, 1, 3)
(4, 3, 1)
(2, 3, 4)
(3, 2, 4)
(4, 2, 3)
(4, 3, 2)
(3, 3, 4)
(4, 3, 3)
(4, 1, 4)
(4, 2, 4)
(4, 3, 4)
You can get a bunch of slightly different orders by changing up the calculation of new_priority in the code. The current version uses squared Cartesian distance as the priorities, but you could use some other value if you wanted to (for instance, one that incorporates the values from the sequences, not only the indexes).
If you don't care too much about whether (1, 1, 3) comes before (1, 2, 2) (so long as they both come after (1, 1, 2), (1, 2, 1) and (2, 1, 1)), you could probably do a breadth-first traversal instead of nearest-first. This would be a bit simpler, as you could use a regular queue (like a collections.deque) rather than a priority queue.
The queues used by this sort of graph traversal mean that this code uses some amount of memory. However, the amount of memory is a lot less than if you had to produce the results all up front before putting them in order. The maximum memory used is proportional to the surface area of the result space, rather than its volume.
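Sketching that simpler breadth-first variant (my addition, under the assumptions of the answer above; `breadth_first_product` is a made-up name): a plain `collections.deque` replaces the priority queue, and results come out grouped by the sum of their indexes rather than strictly nearest-first.

```python
from collections import deque

def breadth_first_product(*sequences):
    # Same graph traversal as nearest_first_product, but with a FIFO
    # queue: results come out level by level (by the sum of indexes)
    # instead of by squared distance from the origin.
    start = (0,) * len(sequences)
    queue = deque([start])
    seen = {start}
    while queue:
        indexes = queue.popleft()
        yield tuple(seq[i] for seq, i in zip(sequences, indexes))
        for i in range(len(sequences)):
            if indexes[i] < len(sequences[i]) - 1:
                nxt = indexes[:i] + (indexes[i] + 1,) + indexes[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)

for tup in breadth_first_product(range(1, 3), range(1, 3)):
    print(tup)
# (1, 1)
# (2, 1)
# (1, 2)
# (2, 2)
```

Within a level the order depends only on the order the neighbors were pushed, which is the looser guarantee the answer describes.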

Your question is a bit ambiguous, but reading your comments and the other answers, it seems you want a Cartesian product implementation that does a breadth-first search instead of a depth-first search.
Recently I had the same need, but with the additional requirement that it not store intermediate results in memory. This is very important to me because I am working with a large number of parameters (and thus an extremely big Cartesian product), so any implementation that stores values or makes recursive calls is non-viable. As you state in your question, this seems to be your case as well.
As I didn't find an answer that fulfils this requirement, I came to this solution:
from itertools import combinations

def product(*sequences):
    '''Breadth First Search Cartesian Product'''
    # sequences = tuple(tuple(seq) for seq in sequences)

    def partitions(n, k):
        for c in combinations(range(n + k - 1), k - 1):
            yield (b - a - 1 for a, b in zip((-1,) + c, c + (n + k - 1,)))

    max_position = [len(i) - 1 for i in sequences]
    for i in range(sum(max_position)):
        for positions in partitions(i, len(sequences)):
            try:
                yield tuple(map(lambda seq, pos: seq[pos], sequences, positions))
            except IndexError:
                continue
    yield tuple(map(lambda seq, pos: seq[pos], sequences, max_position))
In terms of speed, this generator works fine in the beginning but starts getting slower in the latest results. So, although this implementation is a bit slower it works as a generator that doesn't use memory and doesn't give repeated values.
As I mentioned under @Blckknght's answer, the parameters here must also be sequences (subscriptable, length-defined iterables). But you can bypass this limitation (sacrificing a bit of memory) by uncommenting the first line. This may be useful if you are working with generators/iterators as parameters.
I hope this helps; let me know if it solves your problem.

This solution possibly isn't the best as it forces every combination into memory briefly, but it does work. It just might take a little while for large data sets.
import itertools
import random

count = 100  # the (maximum) number of results

results = random.sample(list(itertools.product(*parameter_options)), count)
for parameter_set in results:
    print("".join(parameter_set))
This will give you a list of products in a random order.

Related

Creating a list of values according to a condition in Python

I need to create a list of jobs respecting precedence relationships stated by a dictionary.
dict_preced = {(1, 2): 0, (1, 3): 0, (2, 1): 1, (2, 3): 0, (3, 1): 1, (3, 2): 0}
Where (j1, j2) == 1 means that j1 requires j2, 0 otherwise.
Supposing I already have a starting list j_seq = [3, 2, 1], I need to create a new_list in which all values from j_seq respect the precedence relationships, meaning no job is executed before a job it requires (i.e., jobs 2 and 3 cannot be executed before job 1).
Therefore, there are many candidate lists (i.e., new_list = [1, 2, 3] or new_list = [1, 3, 2]).
How to create samples of new_list that will always respect these precedence relationships?
I found many examples of list comprehension when each value need to respect a given condition with no dependences with other values. But I did not find any examples in which the condition stated concerns two values of the same list.
EDIT: I do not need to get all permutations respecting precedence constraints, just one is enough.
One solution would be to enumerate all the permutations of j_seq, and then do a lookup of each pair combination against your precedence dictionary to identify permutations that were invalid and could be thrown out.
For example:
import itertools

dict_preced = {(1, 2): 0, (1, 3): 0, (2, 1): 1, (2, 3): 0, (3, 1): 1, (3, 2): 0}
j_seq = [3, 2, 1]
valid = []

for perm in itertools.permutations(j_seq):
    print('Permutation:', perm)
    for perm_pair in itertools.combinations(perm, 2):
        precedence = dict_preced.get(perm_pair, 0)
        print('\tCombination:', perm_pair, '=>', precedence)
        if precedence == 1:
            print('\tDecision: exclude', perm)
            break
    else:
        print('\tDecision: include', perm)
        valid.append(perm)

print('Result:', valid)
The result is: [(1, 3, 2), (1, 2, 3)].
The complete output (including debug logs) is:
Permutation: (3, 2, 1)
    Combination: (3, 2) => 0
    Combination: (3, 1) => 1
    Decision: exclude (3, 2, 1)
Permutation: (3, 1, 2)
    Combination: (3, 1) => 1
    Decision: exclude (3, 1, 2)
Permutation: (2, 3, 1)
    Combination: (2, 3) => 0
    Combination: (2, 1) => 1
    Decision: exclude (2, 3, 1)
Permutation: (2, 1, 3)
    Combination: (2, 1) => 1
    Decision: exclude (2, 1, 3)
Permutation: (1, 3, 2)
    Combination: (1, 3) => 0
    Combination: (1, 2) => 0
    Combination: (3, 2) => 0
    Decision: include (1, 3, 2)
Permutation: (1, 2, 3)
    Combination: (1, 2) => 0
    Combination: (1, 3) => 0
    Combination: (2, 3) => 0
    Decision: include (1, 2, 3)
Result: [(1, 3, 2), (1, 2, 3)]
I am answering my own question because I found a solution to this problem.
When using short lists, the method proposed by @jarmod can be handy. However, when long lists are considered, it becomes impracticable, because the number of permutations rises to around 10^18.
For those having access to the CP optimizer from IBM CPLEX (or other constraint programming solver), a possible alternative is using the constraint propagation method to obtain some lists of valid sequences (respecting the precedence constraints).
In my case, I called the CP optimizer through the docplex module.
You can check out the script here: https://github.com/campioni1/CPO_Docplex_precedence_constraints
Please let me know if you encounter any difficulties.
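As an aside not from the original answers: since the EDIT says a single valid ordering is enough, a plain topological sort avoids both permutation enumeration and an external solver. A minimal sketch using Kahn's algorithm (`precedence_order` is a made-up name), assuming the question's convention that `(j1, j2) == 1` means "j1 requires j2":

```python
from collections import defaultdict, deque

def precedence_order(j_seq, dict_preced):
    # Kahn's algorithm: for every (j1, j2) == 1 ("j1 requires j2"),
    # add an edge j2 -> j1, then repeatedly emit a job whose
    # prerequisites have all been emitted.
    succs = defaultdict(list)
    indeg = {j: 0 for j in j_seq}
    for (j1, j2), required in dict_preced.items():
        if required:
            succs[j2].append(j1)
            indeg[j1] += 1
    ready = deque(j for j in j_seq if indeg[j] == 0)
    order = []
    while ready:
        j = ready.popleft()
        order.append(j)
        for k in succs[j]:
            indeg[k] -= 1
            if indeg[k] == 0:
                ready.append(k)
    return order

dict_preced = {(1, 2): 0, (1, 3): 0, (2, 1): 1, (2, 3): 0, (3, 1): 1, (3, 2): 0}
print(precedence_order([3, 2, 1], dict_preced))  # a valid order, e.g. [1, 2, 3]
```

This runs in time linear in the number of jobs and precedence pairs, so it scales to lists where enumerating permutations is hopeless.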

create list of adjacent elements of another list in Python

I am looking to take as input a list and then create another list which contains tuples (or sub-lists) of adjacent elements from the original list, wrapping around for the beginning and ending elements. The input/output would look like this:
l_in = [0, 1, 2, 3]
l_out = [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
My question is closely related to another titled getting successive adjacent elements of a list, but this other question does not take into account wrapping around for the end elements and only handles pairs of elements rather than triplets.
I have a somewhat longer approach to do this involving rotating deques and zipping them together:
from collections import deque
l_in = [0, 1, 2, 3]
deq = deque(l_in)
deq.rotate(1)
deq_prev = deque(deq)
deq.rotate(-2)
deq_next = deque(deq)
deq.rotate(1)
l_out = list(zip(deq_prev, deq, deq_next))
# l_out is [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
However, I feel like there is probably a more elegant (and/or efficient) way to do this using other built-in Python functionality. If, for instance, the rotate() function of deque returned the rotated list instead of modifying it in place, this could be a one- or two-liner (though this approach of zipping together rotated lists is perhaps not the most efficient). How can I accomplish this more elegantly and/or efficiently?
One approach may be to use itertools combined with more_itertools.windowed:
import itertools as it
import more_itertools as mit
l_in = [0, 1, 2, 3]
n = len(l_in)
list(it.islice(mit.windowed(it.cycle(l_in), 3), n-1, 2*n-1))
# [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
Here we generated an infinite cycle of sliding windows and sliced the desired subset.
FWIW, here is an abstraction of the latter code for a general, flexible solution given any iterable input e.g. range(5), "abcde", iter([0, 1, 2, 3]), etc.:
def get_windows(iterable, size=3, offset=-1):
    """Return an iterable of windows including an optional offset."""
    it1, it2 = it.tee(iterable)
    n = mit.ilen(it1)
    return it.islice(mit.windowed(it.cycle(it2), size), n + offset, 2 * n + offset)
list(get_windows(l_in))
# [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
list(get_windows("abc", size=2))
# [('c', 'a'), ('a', 'b'), ('b', 'c')]
list(get_windows(range(5), size=2, offset=-2))
# [(3, 4), (4, 0), (0, 1), (1, 2), (2, 3)]
Note: more-itertools is a separate library, easily installed via:
> pip install more_itertools
This can be done with slices:
l_in = [0, 1, 2, 3]
l_in = [l_in[-1]] + l_in + [l_in[0]]
l_out = [l_in[i:i+3] for i in range(len(l_in)-2)]
Well, or such a perversion:
div = len(l_in)
n = 3
l_out = [l_in[i % div: i % div + 3]
         if len(l_in[i % div: i % div + 3]) == 3
         else l_in[i % div: i % div + 3] + l_in[:3 - len(l_in[i % div: i % div + 3])]
         for i in range(3, len(l_in) + 3 * n + 2)]
You can specify the number of iterations.
Well I figured out a better solution as I was writing the question, but I already went through the work of writing it, so here goes. This solution is at least much more concise:
l_out = list(zip(l_in[-1:] + l_in[:-1], l_in, l_in[1:] + l_in[:1]))
See this post for different answers on how to rotate lists in Python.
The one-line solution above should be at least as efficient as the solution in the question (based on my understanding) since the slicing should not be more expensive than the rotating and copying of the deques (see https://wiki.python.org/moin/TimeComplexity).
Other answers with more efficient (or elegant) solutions are still welcome though.
As you found, there is a list-rotation idiom based on slicing: lst[i:] + lst[:i].
Using it inside a comprehension, with a variable n for the number of adjacent elements wanted, is more general: [lst[i:] + lst[:i] for i in range(n)].
So everything can be parameterized: the number of adjacent elements n in the cyclic rotation, and the 'phase' p, i.e. the starting point if not the 'natural' 0 base index. The default p=-1 is chosen to fit the apparent desired output.
tst = list(range(4))
def rot(lst, n, p=-1):
    return list(zip(*[lst[i + p:] + lst[:i + p] for i in range(n)]))
rot(tst, 3)
Out[2]: [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
Showing the shortened code, as per the comment.

Quickest way to remove mirror opposites from a list

Say I have a list of tuples [(0, 1, 2, 3), (4, 5, 6, 7), (3, 2, 1, 0)], I would like to remove all instances where a tuple is reversed e.g. removing (3, 2, 1, 0) from the above list.
My current (rudimentary) method is:
L = list(itertools.permutations(np.arange(x), 4))
for ll in L:
    if ll[::-1] in L:
        L.remove(ll[::-1])
Where time taken increases exponentially with increasing x. So if x is large this takes ages! How can I speed this up?
Using set comes to mind:
L = set()
for ll in itertools.permutations(np.arange(x), 4):
    if ll[::-1] not in L:
        L.add(ll)
or even, for slightly better performance:
L = set()
for ll in itertools.permutations(np.arange(x), 4):
    if ll not in L:
        L.add(ll[::-1])
The need to keep the first occurrence looks like it forces you to iterate with a conditional.
a = [(0, 1, 2, 3), (4, 5, 6, 7), (3, 2, 1, 0)]
s = set()
a1 = []
for t in a:
    if t not in s:
        a1.append(t)
        s.add(t[::-1])
Edit: The accepted answer addresses the example code (i.e. the itertools permutations sample). This answers the generalized question for any list (or iterable).
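A variation on the same idea (my sketch, not from the answers above; `drop_mirrors` is a made-up name): normalize each tuple to a canonical key, the lexicographically smaller of the tuple and its reverse, so one set entry covers both orientations of a mirror pair.

```python
def drop_mirrors(tuples):
    # Keep the first occurrence of each mirror pair by remembering a
    # canonical form: min(t, t[::-1]) is the same key for t and its
    # reverse, so whichever appears second is skipped.
    seen = set()
    out = []
    for t in tuples:
        key = min(t, t[::-1])
        if key not in seen:
            seen.add(key)
            out.append(t)
    return out

print(drop_mirrors([(0, 1, 2, 3), (4, 5, 6, 7), (3, 2, 1, 0)]))
# [(0, 1, 2, 3), (4, 5, 6, 7)]
```

This works for any iterable of tuples and keeps lookups O(1) per element, like the set-based answers.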

Generating all possible combinations of a list, "itertools.combinations" misses some results

Given a list of items in Python, how can I get all the possible combinations of the items?
There are several similar questions on this site, that suggest using itertools.combinations, but that returns only a subset of what I need:
stuff = [1, 2, 3]
for L in range(0, len(stuff) + 1):
    for subset in itertools.combinations(stuff, L):
        print(subset)
()
(1,)
(2,)
(3,)
(1, 2)
(1, 3)
(2, 3)
(1, 2, 3)
As you see, it returns only items in a strict order, not returning (2, 1), (3, 2), (3, 1), (2, 1, 3), (3, 1, 2), (2, 3, 1), and (3, 2, 1). Is there some workaround for that? I can't seem to come up with anything.
Use itertools.permutations:
>>> import itertools
>>> stuff = [1, 2, 3]
>>> for L in range(0, len(stuff)+1):
...     for subset in itertools.permutations(stuff, L):
...         print(subset)
...
()
(1,)
(2,)
(3,)
(1, 2)
(1, 3)
(2, 1)
(2, 3)
(3, 1)
....
Help on itertools.permutations:
permutations(iterable[, r]) --> permutations object
Return successive r-length permutations of elements in the iterable.
permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)
You can generate all the combinations of a list in python using this simple code
import itertools

a = [1, 2, 3, 4]
for i in range(1, len(a) + 1):
    print(list(itertools.combinations(a, i)))
Result:
[(1,), (2,), (3,), (4,)]
[(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
[(1, 2, 3, 4)]
Are you looking for itertools.permutations instead?
From help(itertools.permutations),
Help on class permutations in module itertools:
class permutations(__builtin__.object)
| permutations(iterable[, r]) --> permutations object
|
| Return successive r-length permutations of elements in the iterable.
|
| permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)
Sample Code :
>>> from itertools import permutations
>>> stuff = [1, 2, 3]
>>> for i in range(0, len(stuff)+1):
...     for subset in permutations(stuff, i):
...         print(subset)
()
(1,)
(2,)
(3,)
(1, 2)
(1, 3)
(2, 1)
(2, 3)
(3, 1)
(3, 2)
(1, 2, 3)
(1, 3, 2)
(2, 1, 3)
(2, 3, 1)
(3, 1, 2)
(3, 2, 1)
From Wikipedia, the difference between permutations and combinations :
Permutation :
Informally, a permutation of a set of objects is an arrangement of those objects into a particular order. For example, there are six permutations of the set {1,2,3}, namely (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1).
Combination :
In mathematics a combination is a way of selecting several things out of a larger group, where (unlike permutations) order does not matter.
itertools.permutations is going to be what you want. By mathematical definition, order does not matter for combinations, meaning (1,2) is considered identical to (2,1). Whereas with permutations, each distinct ordering counts as a unique permutation, so (1,2) and (2,1) are completely different.
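As a compact alternative to the nested loops in the answers above (a sketch of mine, not from any answer; `all_ordered_subsets` is a made-up name), `itertools.chain.from_iterable` flattens the per-length permutation iterators into one:

```python
from itertools import chain, permutations

def all_ordered_subsets(items):
    # Flatten "permutations of every length r" into a single iterator;
    # equivalent to the nested for-loops shown in the answers above.
    return chain.from_iterable(
        permutations(items, r) for r in range(len(items) + 1)
    )

print(list(all_ordered_subsets([1, 2])))
# [(), (1,), (2,), (1, 2), (2, 1)]
```

Because it returns a lazy iterator, you can consume only as many ordered subsets as you need.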
Here is a solution without itertools
First, let's define a translation between an indicator vector of 0s and 1s and a sub-list (an item is included if its indicator is 1):
def indicators2sublist(indicators, arr):
    return [item for item, indicator in zip(arr, indicators) if int(indicator) == 1]
Next, we'll define a mapping from a number between 0 and 2^n - 1 to its binary vector representation (using the string format method):
def bin(n, sz):
    return ('{d:0' + str(sz) + 'b}').format(d=n)
All we have left to do is iterate over all the possible numbers and call indicators2sublist:
def all_sublists(arr):
    sz = len(arr)
    for n in range(0, 2 ** sz):
        b = bin(n, sz)
        yield indicators2sublist(b, arr)
I assume you want all possible combinations as 'sets' of values. Here is a piece of code that I wrote that might help give you an idea:
def getAllCombinations(object_list):
    uniq_objs = set(object_list)
    combinations = []
    for obj in uniq_objs:
        for i in range(0, len(combinations)):
            combinations.append(combinations[i].union([obj]))
        combinations.append(set([obj]))
    return combinations
Here is a sample:
combinations = getAllCombinations([20, 10, 30])
combinations.sort(key=lambda s: len(s))
print(combinations)
... [set([10]), set([20]), set([30]), set([10, 20]), set([10, 30]), set([20, 30]), set([10, 20, 30])]
Note the output contains all 2^n - 1 non-empty subsets, so the running time is at least O(2^n); be careful. This works, but may not be the most efficient.
Just thought I'd put this out there, since I couldn't find EVERY possible outcome elsewhere. Keep in mind I only have the rawest, most basic knowledge of Python, and there's probably a much more elegant solution... (also, excuse the poor variable names).
testing = [1, 2, 3]
testing2 = [0]
n = -1

def testingSomethingElse(number):
    try:
        testing2[0:len(testing2)] == testing[0]
        n = -1
        testing2[number] += 1
    except IndexError:
        testing2.append(testing[0])

while True:
    n += 1
    testing2[0] = testing[n]
    print(testing2)
    if testing2[0] == testing[-1]:
        try:
            n = -1
            testing2[1] += 1
        except IndexError:
            testing2.append(testing[0])
        for i in range(len(testing2)):
            if testing2[i] == 4:
                testingSomethingElse(i + 1)
                testing2[i] = testing[0]
I got away with == 4 because I'm working with integers, but you may have to modify that accordingly...

How to take M things N at a time

I have a list of 46 items. Each has a number associated with it. I want to pair these items up in a set of 23 pairs. I want to evaluate a function over each set. How do I generate such a set?
I can use the combinations function from itertools to produce all the 2-ples but I don't see how to generate all the sets of 23 pairs.
How do I do this or is there sample code I can reference?
>>> L=range(46)
>>> def f(x, y):  # for example
...     return x * y
...
>>> [f(x, y) for x, y in zip(*[iter(L)] * 2)]
[0, 6, 20, 42, 72, 110, 156, 210, 272, 342, 420, 506, 600, 702, 812, 930, 1056, 1190, 1332, 1482, 1640, 1806, 1980]
Edit:
For the powerset of the pairs, we start by creating the pairs the same way. For Python 3, use range in place of xrange, and wrap the zip call in list(), since zip returns an iterator there.
S = zip(*[iter(L)] * 2)  # the set of 23 pairs
[{j for i, j in enumerate(S) if (1 << i) & k} for k in xrange(1 << len(S))]
This will be quite a big list, you may want to use a generator expression
for item in ({j for i, j in enumerate(S) if (1 << i) & k} for k in xrange(1 << len(S))):
    func(item)
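For reference, a Python 3 rendering of the same bitmask idea (my sketch; `pair_subsets` is a name I made up, not from the original answer): the pairs are materialized as a list first, then each subset of the fixed pairs is selected by the bits of k.

```python
def pair_subsets(items):
    # Fixed pairing of consecutive elements (the zip(*[iter(...)]*2)
    # idiom), then every subset of those pairs via the bits of k.
    pairs = list(zip(*[iter(items)] * 2))
    for k in range(1 << len(pairs)):
        yield {p for i, p in enumerate(pairs) if (1 << i) & k}

subsets = list(pair_subsets(range(6)))
print(len(subsets))  # 2**3 = 8 subsets of the 3 fixed pairs
```

Note this enumerates subsets of one fixed pairing, matching the original answer, not all possible pairings.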
First, the natural way to get all the pairs from a list is:
>>> N = 10
>>> input_list = range(N)
>>> [(a,b) for a, b in zip(input_list[::2], input_list[1::2])]
[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
If you want to generate all such pairs, I'd do something like (this is what I call Case 1 below):
>>> set_of_all_pairs = set()
>>> input_list = range(N)
>>> import itertools
>>> for perm in itertools.permutations(input_list):
...     pairs = tuple([(a, b) for a, b in zip(perm[::2], perm[1::2])])
...     set_of_all_pairs.add(pairs)
Granted, as written this differentiates order within a pair (e.g., (1,4) is different from (4,1)) and also treats the order of the pairs as meaningful. So sort the pairs, and the set of pairs, before adding to the set:
>>> set_of_all_pairs = set()
>>> input_list = range(N)
>>> import itertools
>>> for perm in itertools.permutations(input_list):
...     pairs = sorted([tuple(sorted((a, b))) for a, b in zip(perm[::2], perm[1::2])])
...     set_of_all_pairs.add(tuple(pairs))
This is not an efficient algorithm (what I call Case 3 below), but for small values of N it will work.
For N=6, using the sorted method.
set([((0, 4), (1, 3), (2, 5)),
     ((0, 4), (1, 5), (2, 3)),
     ((0, 1), (2, 3), (4, 5)),
     ((0, 3), (1, 5), (2, 4)),
     ((0, 2), (1, 5), (3, 4)),
     ((0, 4), (1, 2), (3, 5)),
     ((0, 3), (1, 4), (2, 5)),
     ((0, 1), (2, 4), (3, 5)),
     ((0, 5), (1, 4), (2, 3)),
     ((0, 5), (1, 2), (3, 4)),
     ((0, 2), (1, 3), (4, 5)),
     ((0, 3), (1, 2), (4, 5)),
     ((0, 2), (1, 4), (3, 5)),
     ((0, 1), (2, 5), (3, 4)),
     ((0, 5), (1, 3), (2, 4))])
Note the solution space grows exponentially fast (e.g., for N=6 it's 15; for N=8, 105; for N=10, 945; and for N=46 it will be 25373791335626257947657609375 ~ 2.5 x 10^28).
EDIT: People criticized the O(N!), but the desired solution grows as O(N!)
The question asks to break a list of N elements (assuming the most general case, where all elements are distinct) into a set of N/2 pairs, and not only to do this once, but to generate all sets of these pairings. This answer is the only one that does so. Yes, it's exponentially slow, and completely infeasible for N=46. That's why I used N=10.
There are three reasonable interpretations of the problem:
Case 1: Ordering matters both inside a pair in the tuple (e.g., function arguments are not symmetric) and in the order of the pairs in a set of pairs also matters, then we will have N! ways of pairing up the numbers in our answer. Meaning in this case both the pair (0,1) and (1,0) are consider distinct, as well as for the N=4 case we consider the pairings {(0,1), (2,3)} distinct from {(2,3),(0,1)}.
Case 2: Ordering matters within a pair, but order is irrelevant in a set of pairings. This means we consider (0,1) and (1,0) as distinct pairs, but consider (for the N=4 case) the set {(0,1), (2,3)} identical to the set {(2,3), (0,1)}, so we do not need to consider both. In this case we will have N!/(N/2)! pairings, as any given set has (N/2)! different orderings. (I didn't explicitly give this above; just stop sorting the tuple of pairs.)
Case 3: Ordering is irrelevant both within a pair and within a set of pairings. This means we consider (0,1) and (1,0) as the same pair (function arguments are symmetric), so we will have N!/((N/2)! * 2^(N/2)) sets of pairs (factorial(N)/(factorial(N/2)*2**(N/2))), as each of the N/2 pairs in each combination has two internal orderings.
So depending on how the problem is phrased we should have:
N   |   Case 1: N!  | Case 2: N!/(N/2)! | Case 3: N!/((N/2)! 2^(N/2))
----+---------------+-------------------+----------------------------
6   |       720     |        120        |            15
8   |     40320     |       1680        |           105
10  |   3628800     |      30240        |           945
46  |  5.5x10^57    |    2.1x10^35      |         2x10^28
Note, my algorithm goes through all permutations, and hence will actually run slower for Case 3 (due to sorting) than Case 1, even though a better algorithm for Case 3 could be much faster. However, my answer is still optimal in asymptotic notation, as even Case 3 is factorial in its asymptotic running time and completely infeasible to solve for N~46. Granted, if you had to do a problem size at the limit of feasibility (N~16) for Case 3 (e.g., generating 518918400 pairings), this solution of iterating through all N! permutations, sorting, and throwing out duplicates is sub-optimal.
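For completeness, a sketch of that faster Case 3 algorithm alluded to above (my addition, not part of the original answer; `all_pairings` is a made-up name): fix the first remaining element and pair it with each candidate partner, so each perfect matching is generated exactly once and no duplicates need filtering.

```python
def all_pairings(items):
    # Recursive Case 3 generator: always pair off the first remaining
    # element, so each perfect matching is produced exactly once.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in all_pairings(remaining):
            yield [(first, partner)] + tail

print(len(list(all_pairings(list(range(6))))))  # 15, matching Case 3 for N=6
```

This produces (N-1) x (N-3) x ... x 1 pairings directly, the double-factorial count from the Case 3 column, instead of wading through all N! permutations.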
