Given a list of lists, print a list of lists where each output list is formed by picking one element from each of the input lists.
For example:
I/P -> [['a','b'],['c'],['d','e','f']]
o/p -> ['a','c','d'], ['a','c','e'], ['a','c','f'], ['b','c','d'], ['b','c','e'], ['b','c','f']
I have come up with a backtracking solution; the code is below. However, I have difficulty finding its time complexity. I think it's O(m^n), where m is the length of the longest list in the given list of lists and n is the number of lists. Is that right? How do you find the time complexity of backtracking problems like this?
def helper(lists, low, high, temp):
    if len(temp) == high:
        print(temp)
    for i in range(low, high):
        for j in range(len(lists[i])):
            helper(lists, i + 1, high, temp + [lists[i][j]])

if __name__ == "__main__":
    l = [['a','b'],['c'],['d','e','f']]
    helper(l, 0, len(l), [])
Regarding the complexity question:
If there are K lists of lengths n_1, ..., n_K, then the total number of lists you need to output is n_1 * n_2 * ... * n_K (assuming order does not matter). Your bound clearly holds, and it is sharp when n_1 = n_2 = ... = n_K.
Alternatively, we can let N = n_1 + ... + n_K be the size of the disjoint union of the input lists and look for a bound in terms of N. For a fixed N, the worst case occurs when n_1 = n_2 = ... = n_K, and we get O((N/K)^K). Maximizing over K, the maximum is attained at K = N/e, where e is Euler's number, so the bound becomes O((e^(1/e))^N) ≈ O(1.44^N).
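A quick numeric sanity check of that maximization (my own snippet, not part of the original answer): for fixed N, (N/K)^K over integer K peaks near K = N/e, which is where the 1.44^N figure comes from.
import math

N = 100
best_k = max(range(1, N + 1), key=lambda k: (N / k) ** k)
print(best_k, N / math.e)                    # 37 vs. 36.79: the maximizing k is about N/e
print((N / best_k) ** best_k, 1.4447 ** N)   # both values are on the order of 1.44**N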
As LeopardShark suggests, you can look up the itertools implementation of product for reference. It will not improve the asymptotic speed, but it will be more space efficient due to lazy returns.
A tidier Python implementation could be as follows:
def custom_product(lsts):
    buf = [[]]
    for lst in lsts:
        buf = [b + [x] for b in buf for x in lst]
    return buf
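For example, on the input from the question:
print(custom_product([['a', 'b'], ['c'], ['d', 'e', 'f']]))
# [['a', 'c', 'd'], ['a', 'c', 'e'], ['a', 'c', 'f'],
#  ['b', 'c', 'd'], ['b', 'c', 'e'], ['b', 'c', 'f']]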
What you are effectively doing is re-implementing itertools.product().
Your code above is equivalent to
import itertools

if __name__ == "__main__":
    l = [['a','b'],['c'],['d','e','f']]
    l2 = itertools.product(*l)
    for x in l2:
        print(list(x))
I think the time complexity of both solutions is O(number of lists × product of lengths of lists), but itertools.product() will be much faster, being written in C and properly optimised.
Related
I'm trying to write the fastest algorithm possible to return the number of "magic triples" (i.e. x, y, z where z is a multiple of y and y is a multiple of x) in a list of 3-2000 integers.
(Note: I believe the list was expected to be sorted and unique but one of the test examples given was [1,1,1] with the expected result of 1 - that is a mistake in the challenge itself though because the definition of a magic triple was explicitly noted as x < y < z, which [1,1,1] isn't. In any case, I was trying to optimise an algorithm for sorted lists of unique integers.)
I haven't been able to work out a solution that doesn't involve three nested loops and is therefore O(n^3). I've seen one online that is O(n^2), but I can't get my head around what it's doing, so it doesn't feel right to submit it.
My code is:
def solution(l):
    if len(l) < 3:
        return 0
    elif l == [1,1,1]:
        return 1
    else:
        halfway = int(l[-1]/2)
        quarterway = int(halfway/2)
        quarterIndex = 0
        halfIndex = 0
        for i in range(len(l)):
            if l[i] >= quarterway:
                quarterIndex = i
                break
        for i in range(len(l)):
            if l[i] >= halfway:
                halfIndex = i
                break
        triples = 0
        for i in l[:quarterIndex+1]:
            for j in l[:halfIndex+1]:
                if j != i and j % i == 0:
                    multiple = 2
                    while (j * multiple) <= l[-1]:
                        if j * multiple in l:
                            triples += 1
                        multiple += 1
        return triples
I've spent quite a lot of time going through examples manually and removing loops through unnecessary sections of the lists, but this still completes a list of 2,000 integers in about a second, whereas the O(n^2) solution I found completes the same list in 0.6 seconds - it seems like such a small difference, but obviously it means mine takes 60% longer.
Am I missing a really obvious way of removing one of the loops?
Also, I saw mention of making a directed graph and I see the promise in that. I can make the list of first nodes from the original list with a built-in function, so in principle I presume that means I can make the overall graph with two for loops and then return the length of the third node list, but I hit a wall with that too. I just can't seem to make progress without that third loop!!
from array import array

def num_triples(l):
    n = len(l)
    pairs = set()
    lower_counts = array("I", (0 for _ in range(n)))
    upper_counts = lower_counts[:]
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[i] += 1
                upper_counts[j] += 1
    return sum(nx * nz for nz, nx in zip(lower_counts, upper_counts))
Here, lower_counts[i] is the number of pairs of which the ith number is the y, and z is the other number in the pair (i.e. the number of different z values for this y).
Similarly, upper_counts[i] is the number of pairs of which the ith number is the y, and x is the other number in the pair (i.e. the number of different x values for this y).
So the number of triples in which the ith number is the y value is just the product of those two numbers.
The use of an array here for storing the counts is for scalability of access time. Tests show that up to n=2000 it makes negligible difference in practice, and even up to n=20000 it only made about a 1% difference to the run time (compared to using a list), but it could in principle be the fastest growing term for very large n.
How about using itertools.combinations instead of nested for loops? Combined with list comprehension, it's cleaner and much faster. Let's say l = [your list of integers] and let's assume it's already sorted.
from itertools import combinations

def div(i, j, k):  # this function has the logic
    return l[k] % l[j] == l[j] % l[i] == 0

r = sum([div(i, j, k) for i, j, k in combinations(range(len(l)), 3) if i < j < k])
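For reference, a self-contained packaging of the same idea (the wrapper and the name count_triples are mine, not part of the original answer); it assumes the list is already sorted:
from itertools import combinations

def count_triples(l):
    # combinations(range(len(l)), 3) already yields index triples with i < j < k
    return sum(l[k] % l[j] == l[j] % l[i] == 0
               for i, j, k in combinations(range(len(l)), 3))

print(count_triples([1, 2, 3, 4, 5, 6]))  # 3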
@alaniwi provided a very smart iterative solution.
Here is a recursive solution.
def find_magicals(lst, nplet):
    """Find the number of magical n-plets in a given lst"""
    res = 0
    for i, base in enumerate(lst):
        # find all the multiples of current base
        multiples = [num for num in lst[i + 1:] if not num % base]
        res += len(multiples) if nplet <= 2 else find_magicals(multiples, nplet - 1)
    return res

def solution(lst):
    return find_magicals(lst, 3)
The problem can be broken down like this: take each number in the original list as the base (i.e. x), and count how many duplets (y, z pairs) can be found among the numbers after it. Since the method for finding all the duplets is the same as for finding triplets, we can solve the problem recursively.
From my testing, this recursive solution is comparable to, if not more performant than, the iterative solution.
This answer was the first suggestion by @alaniwi and is the one I've found to be the fastest (at 0.59 seconds for a 2,000 integer list).
def solution(l):
    n = len(l)
    lower_counts = dict((val, 0) for val in l)
    upper_counts = lower_counts.copy()
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[lower] += 1
                upper_counts[upper] += 1
    return sum((lower_counts[y] * upper_counts[y] for y in l))
I think I've managed to get my head around it. It essentially compares each number in the list with every later number to see whether the larger is divisible by the smaller, and builds two dictionaries:
One with, for each number, how many larger numbers in the list are divisible by it (its possible z values when it plays the role of y),
One with, for each number, how many smaller numbers in the list divide it (its possible x values when it plays the role of y).
For each key you then multiply the two counts, because a 0 in either dictionary means that number can never be the middle (y) value of a triple.
Example:
l = [1,2,3,4,5,6]
lower_counts = {1:5, 2:2, 3:1, 4:0, 5:0, 6:0}
upper_counts = {1:0, 2:1, 3:1, 4:2, 5:1, 6:3}
triple_tuple = ([1,2,4], [1,2,6], [1,3,6])
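A quick check (my own snippet) that ties the example back to the dictionary-based solution above:
print(solution([1, 2, 3, 4, 5, 6]))  # 3, matching [1,2,4], [1,2,6] and [1,3,6]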
The idea is to create all possible combinations of [a,b,c,d][e,f,g,h] where a,b,c,d,e,f,g,h are distinct integers ranging from 1 to n. Order doesn't matter so if I have [a,b,c,d] I don't want [c,b,d,a]. Same applies for [e,f,g,h].
I have the code below, which works but has the drawbacks of being a) extremely slow and b) taking an insane amount of memory (I'm currently trying n=30 and using 13+ GB of memory).
from itertools import product

def build(n):
    a = []
    b = []
    for i in range(1, n):
        for j in [x for x in range(1, n) if x != i]:
            for k in [y for y in range(1, n) if (y != i and y != j)]:
                for l in [z for z in range(1, n) if (z != i and z != j and z != k)]:
                    if sorted([i, j, k, l]) not in a:
                        a.append(sorted([i, j, k, l]))
    b = a
    c = [i for i in product(a, b) if list(set(i[0]).intersection(i[1])) == []]
    print 'INFO: done building (total: %d sets)' % len(c)
    return c
Is there a more efficient way of achieving what I want?
Going off the top of my head, so there might be some bad syntax in here. Should be enough to give you an idea how you could properly approach the problem on your own, though:
import itertools

def quads(n, required_results=None):
    arr1, arr2 = range(1, n + 1), range(1, n + 1)
    results = set()  # only admits unique combinations
    for combination in itertools.product(arr1, arr2):
        results.add(combination)
        if required_results and required_results == len(results):
            # if the second argument is passed, no need to go through the whole combination-space
            break
    return results
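Taking that a step further, here is a sketch of my own (not part of the answer above) that builds the 4-element combinations with itertools.combinations and yields the disjoint pairs lazily, which avoids the memory blow-up of the original code. It keeps the ordered (quad, quad) pairs and the range(1, n) convention from the question's code; the name disjoint_quads is mine.
from itertools import combinations

def disjoint_quads(n):
    # Each quad is a sorted 4-tuple drawn from 1..n-1 (matching range(1, n) above).
    quads = list(combinations(range(1, n), 4))
    for q1 in quads:
        s1 = set(q1)
        for q2 in quads:
            if s1.isdisjoint(q2):
                yield q1, q2

# Example: count the pairs for a small n without ever storing them all.
print(sum(1 for _ in disjoint_quads(10)))  # 630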
I'm very new to programming, so please go easy on me :)
How can I make the following Python code produce its output quicker:
n = int(input())
a = [int(x) for x in input().split()]
assert(len(a) == n)

result = 0
for i in range(0, n):
    for j in range(i + 1, n):
        if a[i] * a[j] > result:
            result = a[i] * a[j]

print(result)
What are some options to maximize its speed?
All you are trying to do is find two different elements whose product is the biggest. That happens for the two biggest numbers in the array, so you basically have to find the two biggest numbers, which can be done in O(n).
One example of how this can be done: find the biggest number and its position, save it, remove it, and take the max of what remains (which will be the second biggest). Now multiply the two.
You are trying to compute the maximal product of distinct integers in a list.
Computing product for every pair has O(N^2) time complexity.
You can reduce this to O(N log(N)) by sorting the list.
a = sorted(a)
ans = max(a[0] * a[1], a[-2] * a[-1])
You can further improve this to linear time, but I will let you figure it out.
I see - the Stepik.org course lesson "Maximum pairwise product".
Try the following, instead of the cascaded for-loop:
n1 = max(a)
a.remove(n1)
result = n1*max(a)
This essentially saves the biggest value in a variable, removes it from the sequence, and multiplies it with the now-biggest value of the sequence (the second biggest value in the original sequence).
My solution would use sorted and set, as this algorithm works on duplicate items as well.
x, y = sorted(set(nums))[-2:]
print(x * y)
I came up with this O(n) solution:
def maxPairwise(l):
    if len(l) == 2:
        return l[0] * l[1]
    elif len(l) == 1:
        return l[0]
    else:
        max1 = 0
        max2 = 0
        for i in l:
            if i > max1:
                max2 = max1  # the old maximum becomes the runner-up
                max1 = i
            elif i > max2:
                max2 = i
        return max1 * max2
Is there a fast algorithm to compute the i-th element (0 <= i < n) of the k-th permutation (0 <= k < n!) of the sequence 0..n-1?
Any order of the permutations may be chosen, it does not have to be lexicographical. There are algorithms that construct the k-th permutation in O(n) (see below). But here the complete permutation is not needed, just its i-th element. Are there algorithms that can do better than O(n)?
Is there an algorithm that has a space complexity less than O(n)?
There are algorithms that construct the k-th permutation by working on an array of size n (see below), but the space requirements might be undesirable for large n. Is there an algorithm that needs less space, especially when only the i-th element is needed?
Algorithm that constructs the k-th permutation of the sequence 0..n-1 with a time and space complexity of O(n):
def kth_permutation(n, k):
    p = range(n)
    while n > 0:
        p[n - 1], p[k % n] = p[k % n], p[n - 1]
        k /= n
        n -= 1
    return p
Source: http://webhome.cs.uvic.ca/~ruskey/Publications/RankPerm/MyrvoldRuskey.pdf
What jkff said. You could modify an algorithm like the one you posted to just return the i-th element of the k-th permutation, but you won't save much time (or space), and you certainly won't reduce the Big-O complexity of the basic algorithm.
The unordered permutation code that you posted isn't really amenable to modification because it has to loop over all the elements performing its swaps, and it's painful to determine if it's possible to break out of the loop early.
However, there's a similar algorithm which produces ordered permutations, and it is possible to break out of that one early, but you still need to perform i inner loops to get the i-th element of the k-th permutation.
I've implemented this algorithm as a class, just to keep the various constants it uses tidy. The code below produces full permutations, but it should be easy to modify to just return the i-th element.
#!/usr/bin/env python

''' Ordered permutations using factorial base counting

    Written by PM 2Ring 2015.02.15
    Derived from C code written 2003.02.13
'''

from math import factorial

class Permuter(object):
    ''' A class for making ordered permutations, one by one '''
    def __init__(self, seq):
        self.seq = list(seq)
        self.size = len(seq)
        self.base = factorial(self.size - 1)
        self.fac = self.size * self.base

    def perm(self, k):
        ''' Build kth ordered permutation of seq '''
        seq = self.seq[:]
        p = []
        base = self.base
        for j in xrange(self.size - 1, 0, -1):
            q, k = divmod(k, base)
            p.append(seq.pop(q))
            base //= j
        p.append(seq[0])
        return p

def test(seq):
    permuter = Permuter(seq)
    for k in xrange(permuter.fac):
        print '%2d: %s' % (k, ''.join(permuter.perm(k)))

if __name__ == '__main__':
    test('abcd')
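If you only need the i-th element, a stand-alone sketch of that modification (my own, following the same factorial-base scheme, not PM 2Ring's original code) could look like this:
from math import factorial

def perm_element(seq, k, i):
    ''' Return just the i-th element of the k-th ordered permutation of seq,
        bailing out of the loop as soon as position i has been decided. '''
    seq = list(seq)
    size = len(seq)
    base = factorial(size - 1)
    for j in range(size - 1, 0, -1):
        q, k = divmod(k, base)
        item = seq.pop(q)
        if size - 1 - j == i:    # position size-1-j has just been filled
            return item
        base //= j
    return seq[0]                # the single remaining element fills the last slot

# e.g. perm_element('abcd', 5, 2) should match Permuter('abcd').perm(5)[2]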
This algorithm has a little more overhead than the unordered permutation maker: it requires factorial to be calculated in advance, and of course factorial gets very large very quickly. Also, it requires one extra division per inner loop. So the time savings in bailing out of the inner loop once you've found the i-th element may be offset by these overheads.
FWIW, the code in your question has room for improvement. In particular, k /= n should be written as k //= n to ensure that integer division is used; your code works ok on Python 2 but not on Python 3. However, since we need both the quotient and remainder, it makes sense to use the built-in divmod() function. Also, by reorganizing things a little we can avoid the multiple calculations of n - 1:
#!/usr/bin/env python

def kth_permutation(n, k):
    p = range(n)
    while n:
        k, j = divmod(k, n)
        n -= 1
        p[n], p[j] = p[j], p[n]
    return p

def test(n):
    last = range(n)
    k = 0
    while True:
        p = kth_permutation(n, k)
        print k, p
        if p == last:
            break
        k += 1

test(3)
output
0 [1, 2, 0]
1 [2, 0, 1]
2 [1, 0, 2]
3 [2, 1, 0]
4 [0, 2, 1]
5 [0, 1, 2]
You probably cannot get the i'th digit of the k'th permutation of n elements in O(n) time or space, because representing the number k itself requires O(log(n!)) = O(n log n) bits, and any manipulations with it have corresponding time complexity.
Given 2 lists of positive integers, find how many ways you can select a number from each of the lists such that their sum is a prime number.
My code is too slow, as both list1 and list2 contain 50,000 numbers each. Is there any way to make it faster so it solves the problem in minutes instead of days? :)
def isprime(n):
    # 2 is the only even prime number
    if n == 2: return True
    # all other even numbers are not primes
    if not n & 1: return False
    # range starts with 3 and only needs to go
    # up to the square root of n, for all odd numbers
    for x in range(3, int(n**0.5) + 1, 2):
        if n % x == 0: return False
    return True

# l1 and l2 are the two input lists (not shown); n counts the number of ways
n = 0
for i2 in l2:
    for i1 in l1:
        if isprime(i1 + i2):
            n = n + 1  # increasing number of ways
            s = "{0:02d}: {1:d}".format(n, i1 + i2)
            print(s)  # printing out
Sketch:
Following @Steve's advice, first figure out all the primes <= max(l1) + max(l2). Let's call that list primes. Note: primes doesn't really need to be a list; you could instead generate the primes up to the max, one at a time.
Swap your lists (if necessary) so that l2 is the longer list. Then turn that into a set: l2 = set(l2).
Sort l1 (l1.sort()).
Then:
for p in primes:
    for i in l1:
        diff = p - i
        if diff < 0:
            # assuming there are no negative numbers in l2;
            # since l1 is sorted, all diffs at and beyond this
            # point will be negative
            break
        if diff in l2:
            # print whatever you like
            # at this point, p is a prime, and is the
            # sum of diff (from l2) and i (from l1)
            pass
Alas, if l2 is, for example:
l2 = [2, 3, 100000000000000000000000000000000000000000000000000]
this is impractical. It relies on the fact that, as in your example, max(max(l1), max(l2)) is "reasonably small".
Fleshed out
Hmm! You said in a comment that the numbers in the lists are up to 5 digits long. So they're less than 100,000. And you said at the start that the lists have 50,000 elements each. So they each contain about half of all possible integers under 100,000, and you're going to have a very large number of sums that are primes. That's all important if you want to micro-optimize ;-)
Anyway, since the maximum possible sum is less than 200,000, any way of sieving will be fast enough - it will be a trivial part of the runtime. Here's the rest of the code:
def primesum(xs, ys):
    if len(xs) > len(ys):
        xs, ys = ys, xs
    # Now xs is the shorter list.
    xs = sorted(xs)  # don't mutate the input list
    sum_limit = xs[-1] + max(ys)  # largest possible sum
    ys = set(ys)  # make lookups fast
    count = 0
    for p in gen_primes_through(sum_limit):
        for x in xs:
            diff = p - x
            if diff < 0:
                # Since xs is sorted, all diffs at and
                # beyond this point are negative too.
                # Since ys contains no negative integers,
                # no point continuing with this p.
                break
            if diff in ys:
                #print("%s + %s = prime %s" % (x, diff, p))
                count += 1
    return count
I'm not going to supply my gen_primes_through(), because it's irrelevant. Pick one from the other answers, or write your own.
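If you just want something to plug in, a minimal sieve-based generator could look like this (a sketch of my own, not the implementation referred to above):
def gen_primes_through(limit):
    # Simple Sieve of Eratosthenes; yields every prime p with 2 <= p <= limit.
    if limit < 2:
        return
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, limit + 1, i)))
    for p in range(2, limit + 1):
        if sieve[p]:
            yield p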
Here's a convenient way to supply test cases:
from random import sample
xs = sample(range(100000), 50000)
ys = sample(range(100000), 50000)
print(primesum(xs, ys))
Note: I'm using Python 3. If you're using Python 2, use xrange() instead of range().
Across two runs, they each took about 3.5 minutes. That's what you asked for at the start ("minutes instead of days"). Python 2 would probably be faster. The counts returned were:
219,334,097
and
219,457,533
The total number of possible sums is, of course, 50000**2 == 2,500,000,000.
About timing
All the methods discussed here, including your original one, take time proportional to the product of two lists' lengths. All the fiddling is to reduce the constant factor. Here's a huge improvement over your original:
def primesum2(xs, ys):
    sum_limit = max(xs) + max(ys)  # largest possible sum
    count = 0
    primes = set(gen_primes_through(sum_limit))
    for i in xs:
        for j in ys:
            if i + j in primes:
                # print("%s + %s = prime %s" % (i, j, i+j))
                count += 1
    return count
Perhaps you'll understand that one better. Why is it a huge improvement? Because it replaces your expensive isprime(n) function with a blazing fast set lookup. It still takes time proportional to len(xs) * len(ys), but the "constant of proportionality" is slashed by replacing a very expensive inner-loop operation with a very cheap operation.
And, in fact, primesum2() is faster than my primesum() in many cases too. What makes primesum() faster in your specific case is that there are only around 18,000 primes less than 200,000. So iterating over the primes (as primesum() does) goes a lot faster than iterating over a list with 50,000 elements.
A "fast" general-purpose function for this problem would need to pick different methods depending on the inputs.
You should use the Sieve of Eratosthenes to calculate prime numbers.
You are also testing primality separately for every possible sum. Instead, consider finding the maximum value you can achieve as a sum from the lists, and generate a list of all the prime numbers up to that maximum value.
Then, as you add up pairs of numbers, you can simply check whether each sum appears in your prime number list.
I would find the highest number in each list. The primes only need to be sieved up to the sum of those two highest numbers.
Here is code to sieve out primes:
def eras(n):
    last = n + 1
    sieve = [0, 0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i * i:last:i] = [0] * (n // i - i + 1)
    return filter(None, sieve)
It takes around 3 seconds to find the primes up to 10,000,000. Then I would use the same n^2 algorithm you are using for generating sums. I think there is an n log n algorithm, but I can't come up with it.
It would look something like this:
from collections import defaultdict

possible = defaultdict(int)
for x in range1:
    for y in range2:
        possible[x + y] += 1

def eras(n):
    last = n + 1
    sieve = [0, 0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i * i:last:i] = [0] * (n // i - i + 1)
    return filter(None, sieve)

n = max(possible.keys())
primes = eras(n)
possible_primes = set(possible.keys()).intersection(set(primes))

for p in possible_primes:
    print "{0}: {1} possible ways".format(p, possible[p])