I need to find the length of the longest combination of pairs that can be made from a list of pairs, without any common elements.
For example, the following list of pairs:
[(A, B), (A, D), (B, C), (B, D), (C, D)]
Would have these combinations:
[(A, B), (C, D)]
[(A, D), (B, C)]
[(B, D)]
And so the longest combination would be 2 pairs in length.
This needs to be able to handle up to several thousand pairs, so generating all possible combinations of pairs at each possible length and checking for overlaps would not work.
However, the total number of unique elements across all pairs is capped at 100, so the longest possible combination that could be encountered would be 50 pairs.
Is there an efficient way to do this?
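This is the maximum matching problem on a graph whose vertices are the elements and whose edges are the pairs; blossom-based solvers (e.g. `networkx.max_weight_matching` with `maxcardinality=True`) solve it in polynomial time. As a hedged illustration of the problem only (the brute-force branch below is exponential, fine for tiny inputs but not for thousands of pairs; `max_matching_size` is a name invented here):

```python
def max_matching_size(pairs):
    """Size of the largest set of pairs sharing no elements.

    Brute force for illustration only; use a blossom-based matcher
    (e.g. networkx.max_weight_matching) for thousands of pairs.
    """
    def solve(remaining, used):
        best = 0
        for i, (a, b) in enumerate(remaining):
            if a in used or b in used:
                continue
            # Include this pair, then branch over the pairs after it.
            best = max(best, 1 + solve(remaining[i+1:], used | {a, b}))
        return best
    return solve(pairs, frozenset())

pairs = [('A', 'B'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]
print(max_matching_size(pairs))  # 2
```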
okay this is what I have, maybe not the best but it's something
so Combo seeds a combo from each pair and feeds it to Combine along with the rest of the array not checked yet
Combine takes the leftover array, the current combo and a list of used elements, then checks each possible combination: if the tuple from the leftover array has any elements in the used list, it skips it; if it doesn't, it adds it to the combo and passes it to a further recursed Combine until it's as long as it can be
arr = [('A', 'B'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('E', 'D'), ("A",'F'),('J','K'),('M','K'),('K','D'),('B','F')]
def Combo(arr):
    combos = []
    for i, tup1 in enumerate(arr):
        combo = [tup1]
        used = [tup1[0], tup1[1]]
        for j, tup2 in enumerate(arr[i:]):
            if (tup2[0] in used) or (tup2[1] in used):
                continue
            else:
                for el in tup2:
                    used.append(el)
                combo.append(tup2)
                combo = Combine(arr[j:], combo, used)
                combos.append(combo)
    return combos
def Combine(arr, combo, used):
    if arr == []:
        return combo
    for i, tup in enumerate(arr):
        unique = True
        for el in tup:
            if el in used:
                unique = False
                continue
        if unique:
            combo.append(tup)
            for el in tup:
                used.append(el)
            return Combine(arr[i:], combo, used)
    return combo
Combo(arr)
OUTPUT
[[('A', 'B'), ('E', 'D'), ('J', 'K')],
[('A', 'D'), ('B', 'C'), ('J', 'K')],
[('B', 'C'), ('E', 'D'), ('A', 'F'), ('J', 'K')],
[('B', 'D'), ('A', 'F'), ('J', 'K')],
[('E', 'D'), ('A', 'F'), ('B', 'C'), ('J', 'K')],
[('A', 'F'), ('J', 'K'), ('B', 'C'), ('E', 'D')],
[('J', 'K'), ('B', 'F'), ('E', 'D')],
[('M', 'K'), ('B', 'F'), ('E', 'D')],
[('K', 'D'), ('B', 'F')]]
as far as I know this should give you each unique combination in a list
Rephrasing the question: we want to find the largest set of pairs with no overlapping elements. Probably not the best solution, but it should work:
def process(pairs):
    output = {}
    max_length = 0
    for i in range(len(pairs)):
        curr = 1
        output[pairs[i]] = set(pairs[i])
        rest = pairs[:i] + pairs[i + 1:]
        for j in range(len(rest)):
            subset = output[pairs[i]] | set(rest[j])
            if len(subset) == len(output[pairs[i]]) + 2:
                curr += 1
                output[pairs[i]] = subset
        max_length = max(curr, max_length)
    return max_length
We populate our initial set with the current pair, and then if the next pair's elements are not present in the current set we extend it. We continue this process until we have checked all remaining pairs. I used this function for testing:
import random, string, timeit

def get_random_pairs(num):
    return [(random.choice(string.ascii_uppercase), random.choice(string.ascii_uppercase)) for _ in range(num)]
print(timeit.timeit('process(pairs)', number=5, setup="from __main__ import process,get_random_pairs; pairs = get_random_pairs(3000)")/5)
On my machine (Intel i7-9750H (12) @ 4.500GHz) it takes about 5-6 seconds to process 3000 pairs.
Related
By nested 2-tuples, I mean something like this: ((a,b),(c,(d,e))) where all tuples have two elements. I don't need different orderings of the elements, just the different ways of putting parentheses around them. For items = [a, b, c, d], there are 5 unique pairings, which are:
(((a,b),c),d)
((a,(b,c)),d)
(a,((b,c),d))
(a,(b,(c,d)))
((a,b),(c,d))
In a perfect world I'd also like to have control over the maximum depth of the returned tuples, so that if I generated all pairings of items = [a, b, c, d] with max_depth=2, it would only return ((a,b),(c,d)).
This problem turned up because I wanted to find a way to generate the results of addition on non-commutative, non-associative numbers. If a+b doesn't equal b+a, and a+(b+c) doesn't equal (a+b)+c, what are all the possible sums of a, b, and c?
I have made a function that generates all pairings, but it also returns duplicates.
import itertools

def all_pairings(items):
    if len(items) == 2:
        yield (*items,)
    else:
        # itertools.pairwise requires Python 3.10+
        for i, pair in enumerate(itertools.pairwise(items)):
            for pairing in all_pairings(items[:i] + [pair] + items[i+2:]):
                yield pairing
For example, it returns ((a,b),(c,d)) twice for items=[a, b, c, d], since it pairs up (a,b) first in one case and (c,d) first in the second case.
Returning duplicates becomes a bigger and bigger problem for larger numbers of items. With duplicates, the number of pairings grows factorially, and without duplicates it grows exponentially, according to the Catalan Numbers (https://oeis.org/A000108).
n   With duplicates: (n-1)!   Without duplicates: (2(n-1))!/(n!(n-1)!)
1   1                         1
2   1                         1
3   2                         2
4   6                         5
5   24                        14
6   120                       42
7   720                       132
8   5040                      429
9   40320                     1430
10  362880                    4862
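The table can be double-checked against the two closed forms; a small sketch (the function names are invented here):

```python
from math import factorial

def with_duplicates(n):
    # (n-1)! pairings when duplicates are produced
    return factorial(n - 1)

def without_duplicates(n):
    # Catalan number C(n-1) = (2(n-1))! / (n! (n-1)!)
    return factorial(2 * (n - 1)) // (factorial(n) * factorial(n - 1))

for n in range(1, 11):
    print(n, with_duplicates(n), without_duplicates(n))
```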
Because of this, I have been trying to come up with an algorithm that doesn't need to search through all the possibilities, only the unique ones. Again, it would also be nice to have control over the maximum depth, but that could probably be added to an existing algorithm. So far I've been unsuccessful in coming up with an approach, and I also haven't found any resources that cover this specific problem. I'd appreciate any help or links to helpful resources.
Using a recursive generator:
items = ['a', 'b', 'c', 'd']
def split(l):
    if len(l) == 1:
        yield l[0]
    for i in range(1, len(l)):
        for a in split(l[:i]):
            for b in split(l[i:]):
                yield (a, b)
list(split(items))
Output:
[('a', ('b', ('c', 'd'))),
('a', (('b', 'c'), 'd')),
(('a', 'b'), ('c', 'd')),
(('a', ('b', 'c')), 'd'),
((('a', 'b'), 'c'), 'd')]
Check of uniqueness:
assert len(list(split(list(range(10))))) == 4862
Reversed order of the items:
items = ['a', 'b', 'c', 'd']
def split(l):
    if len(l) == 1:
        yield l[0]
    for i in range(len(l)-1, 0, -1):
        for a in split(l[:i]):
            for b in split(l[i:]):
                yield (a, b)
list(split(items))
[((('a', 'b'), 'c'), 'd'),
(('a', ('b', 'c')), 'd'),
(('a', 'b'), ('c', 'd')),
('a', (('b', 'c'), 'd')),
('a', ('b', ('c', 'd')))]
With maxdepth:
items = ['a', 'b', 'c', 'd']
def split(l, maxdepth=None):
    if len(l) == 1:
        yield l[0]
    elif maxdepth is not None and maxdepth <= 0:
        yield tuple(l)
    else:
        for i in range(1, len(l)):
            for a in split(l[:i], maxdepth=maxdepth and maxdepth-1):
                for b in split(l[i:], maxdepth=maxdepth and maxdepth-1):
                    yield (a, b)
list(split(items))
# or
list(split(items, maxdepth=3))
# or
list(split(items, maxdepth=2))
[('a', ('b', ('c', 'd'))),
('a', (('b', 'c'), 'd')),
(('a', 'b'), ('c', 'd')),
(('a', ('b', 'c')), 'd'),
((('a', 'b'), 'c'), 'd')]
list(split(items, maxdepth=1))
[('a', ('b', 'c', 'd')),
(('a', 'b'), ('c', 'd')),
(('a', 'b', 'c'), 'd')]
list(split(items, maxdepth=0))
[('a', 'b', 'c', 'd')]
Full credit to mozway for the algorithm - my original idea was to represent the pairing in reverse Polish notation, which would not have lent itself to the following optimizations:
First, we replace the two nested loops:
for a in split(l[:i]):
for b in split(l[i:]):
yield (a, b)
-with itertools.product, which materializes (and thus effectively caches) the results of the inner split(...) calls, as well as producing the pairings in internal C code, which runs much faster.
yield from product(split(l[:i]), split(l[i:]))
Next, we cache the results of the previous split(...) calls. To do this we must sacrifice the laziness of generators and ensure that our function parameters are hashable. Explicitly, this means creating a wrapper that casts the input list to a tuple, and modifying the function body to return lists instead of yielding.
def split(l):
    return _split(tuple(l))

def _split(l):
    if len(l) == 1:
        return l[:1]
    res = []
    for i in range(1, len(l)):
        res.extend(product(_split(l[:i]), _split(l[i:])))
    return res
We then decorate the function with functools.cache to perform the caching. Putting it all together:
from itertools import product
from functools import cache

def split(l):
    return _split(tuple(l))

@cache
def _split(l):
    if len(l) == 1:
        return l[:1]
    res = []
    for i in range(1, len(l)):
        res.extend(product(_split(l[:i]), _split(l[i:])))
    return res
Testing for the following input-
test = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
-produces the following timings:
Original: 5.922573089599609
Revised: 0.08888077735900879
I did also verify that the results matched the original exactly- order and all.
Again, full credit to mozway for the algorithm. I've just applied a few optimizations to speed it up a bit.
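The exact timing driver wasn't shown above, so as an assumption, a self-contained harness along these lines (re-declaring the cached variant, and using a 10-item list to keep it quick) reproduces the kind of measurement quoted:

```python
from functools import cache
from itertools import product
from time import perf_counter

def split(l):
    return _split(tuple(l))

@cache
def _split(l):
    if len(l) == 1:
        return l[:1]
    res = []
    for i in range(1, len(l)):
        res.extend(product(_split(l[:i]), _split(l[i:])))
    return res

test = list('abcdefghij')  # 10 items: Catalan(9) = 4862 pairings
start = perf_counter()
result = split(test)
print(len(result), perf_counter() - start)
```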
Given a list of letters, say L=['a','b','c','d','e','f'] and a list of tuples, for example T=[('a','b'),('a','c'),('b','c')].
Now I want to create the maximum number of possible tuples from the list L that are not already contained in T. This needs to be done without duplicates, i.e. (a,b) would be the same as (b,a). Also, each letter can only be matched with one other letter.
My idea was:
# create a list of all possible tuples first:
all_tuples = [(x, y) for x in L for y in L if x != y]
# now remove duplicates
unique_tuples = list(set([tuple(sorted(elem)) for elem in all_tuples]))
# now, create a new set that matches each letter only once with another letter
visited = set()
output = []
for letter1, letter2 in unique_tuples:
    if (letter1, letter2) in T or (letter2, letter1) in T:
        continue
    if letter1 not in visited and letter2 not in visited:
        visited.add(letter1)
        visited.add(letter2)
        output.append((letter1, letter2))
print(output)
However, this does not always give the maximum amount of possible tuples, depending on what T is. For example, let's say we extract the possible unique_tuples=[('a','b'),('a','d'),('b','c')].
If we append ('a','b') first to our output, we cannot append ('b','c') anymore, since 'b' was matched already. However, if we appended ('a','d') first, we could also get ('b','c') afterwards and get the maximum amount of two tuples.
How can one solve this?
If we ignore the business about not matching the same letter twice, this is a straightforward use of combinations:
>>> from itertools import combinations
>>> L=['a','b','c','d','e','f']
>>> T=[('a','b'),('a','c'),('b','c')]
>>> [t for t in combinations(L, 2) if t not in T]
[('a', 'd'), ('a', 'e'), ('a', 'f'), ('b', 'd'), ('b', 'e'), ('b', 'f'), ('c', 'd'), ('c', 'e'), ('c', 'f'), ('d', 'e'), ('d', 'f'), ('e', 'f')]
If we limit ourselves to only using each letter once, the problem is very straightforward, because we know that we can have at most len(L) / 2 tuples. Just find the available letters (by subtracting those already present in T) and then pair them up in any arbitrary order.
>>> used_letters = {c for t in T for c in t}
>>> free_letters = [c for c in L if c not in used_letters]
>>> [tuple(free_letters[i:i+2]) for i in range(0, 2 * (len(free_letters) // 2), 2)]
[('d', 'e')]
Without using libraries, you could do it like this:
L = ['a','b','c','d','e','f']
T = [('a','b'),('a','c'),('b','c')]

L = sorted(L, key=lambda c: -sum(c in t for t in T))
used = set()
r = [(a, b) for i, a in enumerate(L) for b in L[i+1:]
     if (a, b) not in T and (b, a) not in T
     and used.isdisjoint((a, b)) and not used.update((a, b))]
print(r)
[('a', 'd'), ('b', 'e'), ('c', 'f')]
The letters are sorted in descending order of their frequency in T before being combined, so the hardest letters to match are processed first. This greedy heuristic maximizes the pairing potential for the remaining letters, although it is not guaranteed to find the optimum for every T.
Alternatively, you could use a recursive (DP) approach and check all possible pairing combinations.
def maxTuples(L, T):
    maxCombos = []                            # will return longest
    for i, a in enumerate(L):                 # first letter of tuple
        for j, b in enumerate(L[i+1:], i+1):  # second letter of tuple
            if (a, b) in T: continue          # tuple not in T
            if (b, a) in T: continue          # inverted tuple not in T
            rest = L[:i] + L[i+1:j] + L[j+1:] # recurse with rest of letters
            R = [(a, b)] + maxTuples(rest, T) # adding to selected pair
            if len(R)*2 + 1 >= len(L): return R  # max possible, stop here
            if len(R) > len(maxCombos):       # longer combination of tuples
                maxCombos = R                 # track it
    return maxCombos
...
L=['a','b','c','d','e','f']
T=[('a','b'),('a','c'),('b','c'),('c','f')]
print(maxTuples(L,T))
[('a', 'd'), ('b', 'f'), ('c', 'e')]
L = list("ABCDEFGHIJKLMNOP")
T = [('K', 'N'), ('G', 'F'), ('I', 'P'), ('C', 'A'), ('O', 'M'),
('D', 'B'), ('L', 'J'), ('E', 'H'), ('F', 'E'), ('L', 'H'),
('J', 'G'), ('N', 'I'), ('C', 'M'), ('A', 'P'), ('D', 'O'),
('K', 'B'), ('G', 'H'), ('O', 'A'), ('I', 'J'), ('N', 'M'),
('F', 'P'), ('E', 'B'), ('K', 'L'), ('D', 'C'), ('D', 'E'),
('L', 'F'), ('B', 'H'), ('I', 'A'), ('K', 'G'), ('M', 'O'),
('P', 'C'), ('N', 'J'), ('J', 'E'), ('N', 'P'), ('A', 'G'),
('H', 'O'), ('I', 'B'), ('K', 'F'), ('M', 'C'), ('L', 'D'),
('A', 'B'), ('C', 'E'), ('D', 'F'), ('G', 'I'), ('H', 'J'),
('K', 'M'), ('L', 'N'), ('O', 'P')]
print(maxTuples(L,T))
[('A', 'D'), ('B', 'C'), ('E', 'G'), ('F', 'H'),
('I', 'K'), ('J', 'M'), ('L', 'P'), ('N', 'O')]
Note that the function will be slow if the tuples in T exclude so many pairings that it is impossible to produce a combination of len(L)/2 tuples. It can be optimized further by filtering letters that are completely excluded as we go down the recursion:
def maxTuples(L, T):
    if not isinstance(T, dict):
        T, E = {c: {c} for c in L}, T            # convert T to a dictionary
        for a, b in E: T[a].add(b); T[b].add(a)  # of excluded letter sets
    L = [c for c in L if not T[c].issuperset(L)] # filter fully excluded
    maxCombos = []                               # will return longest
    for i, a in enumerate(L):                    # first letter of tuple
        for j, b in enumerate(L[i+1:], i+1):     # second letter of tuple
            if b in T[a]: continue               # exclude tuples in T
            rest = L[:i] + L[i+1:j] + L[j+1:]    # recurse with rest of letters
            R = [(a, b)] + maxTuples(rest, T)    # adding to selected pair
            if len(R)*2 + 1 >= len(L): return R  # max possible, stop here
            if len(R) > len(maxCombos):          # longer combination of tuples
                maxCombos = R                    # track it
    return maxCombos
I have a big list of lists of tuples like
actions = [ [('d', 'r'), ... ('c', 'e'),('', 'e')],
[('r', 'e'), ... ('c', 'e'),('d', 'r')],
... ,
[('a', 'b'), ... ('c', 'e'),('c', 'h')]
]
and I want to find the co-occurrences of the tuples.
I have tried the suggestions from this question but the accepted answer is just too slow. For example, in a list of 1494 lists of tuples, the resulting dictionary size was 18225703 and it took hours to run for 2-tuple co-occurrence. So plain permutation and counting doesn't seem to be the answer, since I have an even bigger list.
I expect the output to extract the most common pairs (2) or larger groups (3, 4, 5 at most) of tuples that co-occur the most. Using the previous list as an example:
('c', 'e'),('d', 'r')
would be a common co-occurrence when searching for pairs, since they occur together frequently. Is there an efficient method to achieve this?
I think there is no hope for a fundamentally faster algorithm: you have to compute the combinations to count them. However, if there is a threshold of co-occurrences below which you are not interested, you can try to reduce the complexity of the algorithm. In both cases, there is hope for lower space complexity.
Let's take a small example:
>>> actions = [[('d', 'r'), ('c', 'e'),('', 'e')],
... [('r', 'e'), ('c', 'e'),('d', 'r')],
... [('a', 'b'), ('c', 'e'),('c', 'h')]]
General answer
This answer is probably the best for a large list of lists, but you can avoid creating intermediary lists. First, create an iterable on all present pairs of elements (elements are pairs too in your case, but that doesn't matter):
>>> import itertools
>>> it = itertools.chain.from_iterable(itertools.combinations(pair_list, 2) for pair_list in actions)
If we want to see the result, we have to consume the iterable:
>>> list(it)
[(('d', 'r'), ('c', 'e')), (('d', 'r'), ('', 'e')), (('c', 'e'), ('', 'e')), (('r', 'e'), ('c', 'e')), (('r', 'e'), ('d', 'r')), (('c', 'e'), ('d', 'r')), (('a', 'b'), ('c', 'e')), (('a', 'b'), ('c', 'h')), (('c', 'e'), ('c', 'h'))]
Then count the sorted pairs (with a fresh it!)
>>> it = itertools.chain.from_iterable(itertools.combinations(pair_list, 2) for pair_list in actions)
>>> from collections import Counter
>>> c = Counter((a,b) if a<=b else (b,a) for a,b in it)
>>> c
Counter({(('c', 'e'), ('d', 'r')): 2, (('', 'e'), ('d', 'r')): 1, (('', 'e'), ('c', 'e')): 1, (('c', 'e'), ('r', 'e')): 1, (('d', 'r'), ('r', 'e')): 1, (('a', 'b'), ('c', 'e')): 1, (('a', 'b'), ('c', 'h')): 1, (('c', 'e'), ('c', 'h')): 1})
>>> c.most_common(2)
[((('c', 'e'), ('d', 'r')), 2), ((('', 'e'), ('d', 'r')), 1)]
At least in terms of space, this solution should be efficient, since everything is lazy and the number of elements of the Counter is the number of combinations of elements from the same list, that is at most N(N-1)/2 where N is the number of distinct elements across all the lists ("at most" because some elements never "meet" each other and therefore some combinations never happen).
The time complexity is O(M * L^2) where M is the number of lists and L is the size of the largest list.
With a threshold on the co-occurences number
I assume that all elements in a list are distinct. The key idea is that if an element is present in only one list, then this element has strictly no chance to beat anyone at this game: it will have 1 co-occurrence with each of its neighbors, and 0 with the elements of other lists. If there are a lot of "orphans", it might be useful to remove them before computing the combinations:
>>> d = Counter(itertools.chain.from_iterable(actions))
>>> d
Counter({('c', 'e'): 3, ('d', 'r'): 2, ('', 'e'): 1, ('r', 'e'): 1, ('a', 'b'): 1, ('c', 'h'): 1})
>>> orphans = set(e for e, c in d.items() if c <= 1)
>>> orphans
{('a', 'b'), ('r', 'e'), ('c', 'h'), ('', 'e')}
Now, try the same algorithm:
>>> it = itertools.chain.from_iterable(itertools.combinations((p for p in pair_list if p not in orphans), 2) for pair_list in actions)
>>> c = Counter((a,b) if a<=b else (b,a) for a,b in it)
>>> c
Counter({(('c', 'e'), ('d', 'r')): 2})
Note the inner comprehension: parentheses, not square brackets (a generator expression, not a list).
If you have K orphans in a list of N elements, the work for that list falls from N(N-1)/2 to (N-K)(N-K-1)/2, that is (if I'm not mistaken!) K(2N-K-1)/2 combinations fewer.
This can be generalized: if an element is present in two or fewer lists, then it will have at most 2 co-occurrences with any other element, and so on.
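A hedged sketch of that generalization: drop every element that appears in fewer than some threshold of lists before counting (the names cooccurrences and threshold are invented here, and elements are assumed distinct within each list):

```python
import itertools
from collections import Counter

def cooccurrences(actions, threshold=2):
    # Count how many lists each element appears in (assuming elements
    # are distinct within a list), then drop the ones below the
    # threshold before forming combinations.
    freq = Counter(itertools.chain.from_iterable(actions))
    keep = {e for e, c in freq.items() if c >= threshold}
    it = itertools.chain.from_iterable(
        itertools.combinations([p for p in lst if p in keep], 2)
        for lst in actions)
    # Normalize pair order so (a, b) and (b, a) count together.
    return Counter((a, b) if a <= b else (b, a) for a, b in it)

actions = [[('d', 'r'), ('c', 'e'), ('', 'e')],
           [('r', 'e'), ('c', 'e'), ('d', 'r')],
           [('a', 'b'), ('c', 'e'), ('c', 'h')]]
print(cooccurrences(actions))
```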
If this is still too slow, then switch to a faster language.
I have a tuple that looks like:
t=(('a','b'),('a','c','d','e'),('c','d','e'))
I need to rearrange it so I have a new tuple that will look like:
t2=(('a','b'),('a','c'),('c','d'),('d','e'),('c','d'),('d','e'))
Basically the new tuple takes pairs (of 2) from each element of the old tuple. But I am not sure how to get started. Thanks for your help.
Use a generator expression with zip to pair and convert to a tuple at the end:
>>> t = (('a','b'),('a','c','d','e'),('c','d','e'))
>>> tuple(x for tupl in t for x in zip(tupl, tupl[1:]))
(('a', 'b'), ('a', 'c'), ('c', 'd'), ('d', 'e'), ('c', 'd'), ('d', 'e'))
Try this out :
tuple([(t[i][j],t[i][j+1]) for i in range(len(t)) for j in range(len(t[i])-1)])
#[('a', 'b'), ('a', 'c'), ('c', 'd'), ('d', 'e'), ('c', 'd'), ('d', 'e')]
You can also try another way. If the problem is reduced to do this for one tuple alone :
def pairs(my_tuple):
    return [(my_tuple[i], my_tuple[i+1]) for i in range(len(my_tuple)-1)]
Then this can be mapped for all the tuples
tuple(sum(list(map(pairs,t)),[]))
#(('a', 'b'), ('a', 'c'), ('c', 'd'), ('d', 'e'), ('c', 'd'), ('d', 'e'))
Explanation :
map(pairs,t) : maps the function pairs for every element in tuple t
list(map(pairs,t)) : output of the above
But as a nested list
[[('a', 'b')], [('a', 'c'), ('c', 'd'), ('d', 'e')], ...]
sum(list(...),[]) : Flattens out this nested list for the desired output
Here's what I came up with really quick
def transform(t):
    out = []
    for tup in t:
        for i in range(0, len(tup) - 1):
            out.append((tup[i], tup[i+1]))
    return tuple(out)
You can use this easy to understand code:
t = (('a','b'),('a','c','d','e'),('c','d','e'))
t2 = []
for i in t:
    for j in range(len(i)-1):
        t2.append((i[j], i[j+1]))
t2 = tuple(t2)
Obviously it isn't as optimized as the other answers, but it is easy to understand.
That is something equivalent to:
t2 = tuple((i[j], i[j+1]) for i in t for j in range(len(i)-1))
That is a generator expression, something quite similar to a list comprehension (it uses parentheses instead of square brackets) and they do basically the same thing, at least in simple code like this. I still don't understand their differences very well, but a generator can only be consumed once (and starts producing values sooner), while a list comprehension is slower to build but reusable.
Broken down, the generator expression means:
t2 = tuple(...)  # Make a tuple from the result, otherwise it would be a list.
for i in t  # Iterate over each item of t; each item is called i.
for i in t for j in range(len(i))  # Iterate over each item i of t, and then over range(len(i)), calling each value j.
(i[j], i[j+1]) for i in t for j in range(len(i))  # Same as before, but for each j the inner loop produces, emit (i[j], i[j+1]).
I know, putting two loops inside one generator/list comprehension looks strange. I always look at an answer like this one to remember how to do it.
My old answer was:
t = (('a','b'),('a','c','d','e'),('c','d','e'))
t2 = []
for i in t:
    for j in range(len(i)):
        if j < len(i) - 1:
            t2.append((i[j], i[j+1]))
t2 = tuple(t2)
But I noticed that by adding -1 to the len() in the loop I can avoid that check, because I will never get an out-of-range index.
I don't think the title does a great job acting as a high level explanation of the problem, but I do think this is an interesting problem to try to solve:
Given a python list of tuples of length 2:
pairs = [('G', 'H'), ('C', 'D'), ('B', 'D'), ('A', 'B'), ('B', 'C')]
I would like to create a new list containing tuples of length 3, on the condition that the tuple ('X', 'Y', 'Z') is created only if the pairs ('X', 'Y'), ('Y', 'Z'), and ('X', 'Z') all appear as tuples in the pairs list. In the case of my pairs list, only the triplet ('B', 'C', 'D') would be created (preferably alphabetically).
I haven't used python in several months, so am a bit rusty and would prefer to solve this using mostly base python packages, but open to any suggestions. Thanks in advance for any help!
I'd use itertools to check if all the pairs exist.
from itertools import combinations

doubles = [('G', 'H'), ('C', 'D'), ('B', 'D'), ('A', 'B'), ('B', 'C')]
keys = set([x for double in doubles for x in double])
options = combinations(keys, 3)
triples = list()
for option in options:
    x, y, z = sorted(option)
    first, second, third = (x, y), (x, z), (y, z)
    if first in doubles and second in doubles and third in doubles:
        triples.append((x, y, z))  # append in alphabetical order
This assumes that all the tuples in your list are already sorted though.
vals = set([i for (i, j) in pairs] + [j for (i, j) in pairs])
triples = [(i, j, k) for i in vals
           for j in vals
           for k in vals
           if (((i, j) in pairs) and
               ((j, k) in pairs) and
               ((i, k) in pairs))]
Now, this only works if the order within the tuples matters. If not, you'd want to include the reverse-order tuples in pairs as well.
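Alternatively, a hedged order-insensitive sketch (find_triples is a name invented here): normalize every pair to sorted order once, then test the three sub-pairs of each candidate triple against that set.

```python
from itertools import combinations

def find_triples(pairs):
    # Normalize so ('X', 'Y') and ('Y', 'X') compare equal.
    norm = {tuple(sorted(p)) for p in pairs}
    vals = sorted({x for p in pairs for x in p})
    # combinations() over sorted values yields alphabetical triples.
    return [t for t in combinations(vals, 3)
            if all(tuple(sorted(q)) in norm for q in combinations(t, 2))]

pairs = [('G', 'H'), ('C', 'D'), ('B', 'D'), ('A', 'B'), ('B', 'C')]
print(find_triples(pairs))  # [('B', 'C', 'D')]
```

Using a set for the membership tests also avoids the O(len(pairs)) list scans of the approaches above.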