i-th element of k-th permutation - python

Is there a fast algorithm to compute the i-th element (0 <= i < n) of the k-th permutation (0 <= k < n!) of the sequence 0..n-1?
Any order of the permutations may be chosen, it does not have to be lexicographical. There are algorithms that construct the k-th permutation in O(n) (see below). But here the complete permutation is not needed, just its i-th element. Are there algorithms that can do better than O(n)?
Is there an algorithm that has a space complexity less than O(n)?
There are algorithms that construct the k-th permutation by working on an array of size n (see below), but the space requirements might be undesirable for large n. Is there an algorithm that needs less space, especially when only the i-th element is needed?
Algorithm that constructs the k-th permutation of the sequence 0..n-1 with a time and space complexity of O(n):
def kth_permutation(n, k):
    p = range(n)
    while n > 0:
        p[n - 1], p[k % n] = p[k % n], p[n - 1]
        k /= n
        n -= 1
    return p
Source: http://webhome.cs.uvic.ca/~ruskey/Publications/RankPerm/MyrvoldRuskey.pdf

What jkff said. You could modify an algorithm like the one you posted to just return the i-th element of the k-th permutation, but you won't save much time (or space), and you certainly won't reduce the Big-O complexity of the basic algorithm.
The unordered permutation code that you posted isn't really amenable to modification because it has to loop over all the elements performing its swaps, and it's painful to determine if it's possible to break out of the loop early.
However, there's a similar algorithm which produces ordered permutations, and it is possible to break out of that one early, but you still need to perform i inner loops to get the i-th element of the k-th permutation.
I've implemented this algorithm as a class, just to keep the various constants it uses tidy. The code below produces full permutations, but it should be easy to modify to just return the i-th element.
#!/usr/bin/env python

''' Ordered permutations using factorial base counting

    Written by PM 2Ring 2015.02.15
    Derived from C code written 2003.02.13
'''

from math import factorial

class Permuter(object):
    ''' A class for making ordered permutations, one by one '''
    def __init__(self, seq):
        self.seq = list(seq)
        self.size = len(seq)
        self.base = factorial(self.size - 1)
        self.fac = self.size * self.base

    def perm(self, k):
        ''' Build kth ordered permutation of seq '''
        seq = self.seq[:]
        p = []
        base = self.base
        for j in xrange(self.size - 1, 0, -1):
            q, k = divmod(k, base)
            p.append(seq.pop(q))
            base //= j
        p.append(seq[0])
        return p

def test(seq):
    permuter = Permuter(seq)
    for k in xrange(permuter.fac):
        print '%2d: %s' % (k, ''.join(permuter.perm(k)))

if __name__ == '__main__':
    test('abcd')
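For illustration, here's a sketch of the early-exit variant described above, as an extra method on Permuter (the method name perm_element is mine, not part of the original code):

    def perm_element(self, k, i):
        ''' Return only element i of the kth ordered permutation of seq '''
        seq = self.seq[:]
        base = self.base
        for t, j in enumerate(xrange(self.size - 1, 0, -1)):
            q, k = divmod(k, base)
            item = seq.pop(q)
            if t == i:
                return item  # bail out early; elements after i are never built
            base //= j
        return seq[0]  # i == size - 1: the single remaining element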
This algorithm has a little more overhead than the unordered permutation maker: it requires factorial to be calculated in advance, and of course factorial gets very large very quickly. Also, it requires one extra division per inner loop. So the time savings in bailing out of the inner loop once you've found the i-th element may be offset by these overheads.
FWIW, the code in your question has room for improvement. In particular, k /= n should be written as k //= n to ensure that integer division is used; your code works OK on Python 2 but not on Python 3. However, since we need both the quotient and the remainder, it makes sense to use the built-in divmod() function. Also, by reorganizing things a little we can avoid the multiple calculations of n - 1:
#!/usr/bin/env python

def kth_permutation(n, k):
    p = range(n)
    while n:
        k, j = divmod(k, n)
        n -= 1
        p[n], p[j] = p[j], p[n]
    return p

def test(n):
    last = range(n)
    k = 0
    while True:
        p = kth_permutation(n, k)
        print k, p
        if p == last:
            break
        k += 1

test(3)
output
0 [1, 2, 0]
1 [2, 0, 1]
2 [1, 0, 2]
3 [2, 1, 0]
4 [0, 2, 1]
5 [0, 1, 2]

You probably cannot get the i-th element of the k-th permutation of n elements in O(n) time or space, because representing the number k itself requires O(log(n!)) = O(n log n) bits, and any manipulation of it has corresponding time complexity.

What is the time complexity of the following code- find smallest missing number in an array

The problem statement is:
Given an array A of N integers, return the smallest positive integer (greater than 0) that does not occur in A.
For example, given A = [1, 3, 6, 4, 1, 2], the function should return 5.
Given A = [1, 2, 3], the function should return 4.
Given A = [−1, −3], the function should return 1.
Can anyone tell me what the time-complexity of the following solution to the code be:
def solution(A):
    m = max(A)
    if m < 1:
        return 1
    A = set(A)
    B = set(range(1, m + 1))
    D = B - A
    if len(D) == 0:
        return m + 1
    else:
        return min(D)
It has to be O(n) or greater because we are finding the max.
I had a doubt: would creating a set from the range 1 to the max element be O(n log n), since the result will be a sorted set of elements?
The time complexity of this solution is linear, O(m + n), where m is the maximum element in A and n is the length of the input array A.
Here's a breakdown:
max(A) is O(n) since you have to look at each element of A.
set(A) is O(n) since you have to look at each element of A.
set(range(1, m + 1)) is O(m) since you iterate over every element between 1 and m.
The result is O(m + n). While we can ignore constant multiples of n or m (you don't write O(2n)), we can't drop either n or m from the sum, because there is no guaranteed relation between them. They can be as different as 1 and 1e64, or vice versa.
On a side note, this algorithm is really, really bad. You can solve this problem in O(n) time and O(n) additional space.
A solution with O(n) time and space complexity could be as follows:
def solve(A):
    seen = set()
    res = 1
    for el in A:
        seen.add(el)
        while res in seen:
            res += 1
    return res
It works as follows:
Init res to the best possible result (1).
Keep track of seen elements with a set, for constant-time lookups.
Iterate over A's elements, adding each to the set.
Increment res while it is present in seen.
The reason this is O(n) time complexity is that in the worst case the elements of the input array have no gaps between them and start at one, so we have to increment res n times in total. If there's any gap, res has already found it and sits there without incrementing further.
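For example, on the test cases from the question:

print(solve([1, 3, 6, 4, 1, 2]))  # 5
print(solve([1, 2, 3]))           # 4
print(solve([-1, -3]))            # 1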

Fast access to sums of pairwise ops

Given a vector of numbers v, I can access sums of sections of this vector by using cumulative sums, i.e., instead of O(n)
v = [1,2,3,4,5]

def sum_v(i,j):
    return sum(v[i:j])
I can do O(1)
import itertools

v = [1,2,3,4,5]
cache = [0] + list(itertools.accumulate(v))

def sum_v(i,j):
    return cache[j] - cache[i]
Now, I need something similar but for pairwise instead of sum_v:
def pairwise(i,j):
    ret = 0
    for p in range(i,j):
        for q in range(p+1,j):
            ret += f(v[p], v[q])
    return ret
where f is, preferably, something relatively arbitrary (e.g., * or ^ or ...). However, something working for just product or just XOR would be good too.
PS1. I am looking for a speed-up in terms of O, not generic memoization such as functools.cache.
PS2. The question is about algorithms, not implementations, and is thus language-agnostic. I tagged it python only because my examples are in python.
PS3. Obviously, one can precompute all values of pairwise, so the solution should be o(n^2) both in time and space (preferably linear).
For binary operations such as or, and, xor, an O(N) algorithm is possible.
Let's consider XOR for this example, but this can be easily modified for OR/AND as well.
The most important thing to note here is that the result of a binary operator on bit x of two numbers does not affect the result for bit y (you can easily see that by trying something like 010 ^ 011 = 001). So we can count the contribution made by each bit position of all the numbers to the final sum, one position at a time, starting from the least significant bit. Here's a simple algorithm for that:
Construct a simple table dp, where dp[i][j] = count of numbers in range [i,n) with jth bit set
from math import log2

l = [5, 3, 1, 7, 8]
n = len(l)
max_binary_length = int(max(log2(i) for i in l)) + 1  # maximum number of bits we need to check

# dp[i][j] = count of numbers in l[i:] with the jth bit set
dp = [[0] * max_binary_length for _ in range(n + 1)]
for i in range(n - 1, -1, -1):
    for j in range(max_binary_length):
        dp[i][j] = dp[i + 1][j] + ((l[i] >> j) & 1)

ans = 0
for j in range(max_binary_length):
    # we check the jth bits of all numbers here;
    # for each i we need the pairs (l[i], later number) whose jth bits differ
    for i in range(n):
        if (l[i] >> j) & 1 == 0:
            # since 0 ^ 1 = 1, we need the count of later numbers with jth bit 1
            count = dp[i + 1][j]
        else:
            # since 1 ^ 0 = 1, we need the count of later numbers with jth bit 0
            count = (n - 1 - i) - dp[i + 1][j]
        # since we're checking the jth bit, it contributes 2**j when set
        ans += count * (1 << j)
print(ans)  # 72 for this list
In most cases, for integers, number of bits <= 32. So this should have a complexity of O(N*log2(max(A[i]))) == O(N*32) == O(N).
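As a quick sanity check (my addition, not part of the original answer), the result can be compared against the O(N^2) brute force:

brute = sum(l[p] ^ l[q] for p in range(len(l)) for q in range(p + 1, len(l)))
print(brute)  # 72, the same value as ans above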
In principle, you can always precompute every possible output in Θ(n²) space and then answer queries in Θ(1) by just looking it up in the precomputed table. Everything else is a trade-off depending on the cost of precomputation time, space, and actual computation time; the interesting question is what can be done with o(n²) space, i.e. sub-quadratic. This will generally depend on the application, and on properties of the binary operation f.
In the particular case where f is *, we can get Θ(1) lookups with only Θ(n) space: we'll take advantage that the sum for pairs where p < q equals the sum of all pairs, minus the sum of pairs where p = q, divided by 2 to account for the pairs where p > q.
# input data
v = [1, 2, 3, 4, 5]
n = len(v)

# precomputation
partial_sums = [0] * (n + 1)
partial_sums_squares = [0] * (n + 1)
for i, x in enumerate(v):
    partial_sums[i + 1] = partial_sums[i] + x
    partial_sums_squares[i + 1] = partial_sums_squares[i] + x * x

# query response
def pairwise(i, j):
    s = partial_sums[j] - partial_sums[i]
    s2 = partial_sums_squares[j] - partial_sums_squares[i]
    return (s * s - s2) / 2
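For example, pairwise(1, 4) covers the pairs (2,3), (2,4) and (3,4), so it should return 2*3 + 2*4 + 3*4 = 26:

print(pairwise(1, 4))  # 26.0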
More generally, this works whenever f is commutative and distributes over the accumulator operation (+ in this case). I wrote the example here without itertools, so that it is more easily translatable to other languages, since the question is meant to be language-agnostic.

Guidance on removing a nested for loop from function

I'm trying to write the fastest algorithm possible to return the number of "magic triples" (i.e. x, y, z where z is a multiple of y and y is a multiple of x) in a list of 3-2000 integers.
(Note: I believe the list was expected to be sorted and unique but one of the test examples given was [1,1,1] with the expected result of 1 - that is a mistake in the challenge itself though because the definition of a magic triple was explicitly noted as x < y < z, which [1,1,1] isn't. In any case, I was trying to optimise an algorithm for sorted lists of unique integers.)
I haven't been able to work out a solution that doesn't involve three nested loops and therefore O(n^3) time. I've seen one online that is O(n^2), but I can't get my head around what it's doing, so it doesn't feel right to submit it.
My code is:
def solution(l):
    if len(l) < 3:
        return 0
    elif l == [1,1,1]:
        return 1
    else:
        halfway = int(l[-1]/2)
        quarterway = int(halfway/2)
        quarterIndex = 0
        halfIndex = 0
        for i in range(len(l)):
            if l[i] >= quarterway:
                quarterIndex = i
                break
        for i in range(len(l)):
            if l[i] >= halfway:
                halfIndex = i
                break
        triples = 0
        for i in l[:quarterIndex+1]:
            for j in l[:halfIndex+1]:
                if j != i and j % i == 0:
                    multiple = 2
                    while (j * multiple) <= l[-1]:
                        if j * multiple in l:
                            triples += 1
                        multiple += 1
        return triples
I've spent quite a lot of time going through examples manually and removing loops through unnecessary sections of the lists but this still completes a list of 2,000 integers in about a second where the O(n^2) solution I found completes the same list in 0.6 seconds - it seems like such a small difference but obviously it means mine takes 60% longer.
Am I missing a really obvious way of removing one of the loops?
Also, I saw mention of making a directed graph and I see the promise in that. I can make the list of first nodes from the original list with a built-in function, so in principle I presume that means I can make the overall graph with two for loops and then return the length of the third node list, but I hit a wall with that too. I just can't seem to make progress without that third loop!!
from array import array

def num_triples(l):
    n = len(l)
    lower_counts = array("I", (0 for _ in range(n)))
    upper_counts = lower_counts[:]
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[i] += 1
                upper_counts[j] += 1
    return sum(nx * nz for nz, nx in zip(lower_counts, upper_counts))
Here, lower_counts[i] is the number of pairs of which the ith number is the y, and z is the other number in the pair (i.e. the number of different z values for this y).
Similarly, upper_counts[i] is the number of pairs of which the ith number is the y, and x is the other number in the pair (i.e. the number of different x values for this y).
So the number of triples in which the ith number is the y value is just the product of those two numbers.
The use of an array here for storing the counts is for scalability of access time. Tests show that up to n=2000 it makes negligible difference in practice, and even up to n=20000 it only made about a 1% difference to the run time (compared to using a list), but it could in principle be the fastest growing term for very large n.
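As a quick usage check (my example), the list used in the worked example further down gives:

print(num_triples([1, 2, 3, 4, 5, 6]))  # 3: (1,2,4), (1,2,6), (1,3,6)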
How about using itertools.combinations instead of nested for loops? Combined with list comprehension, it's cleaner and much faster. Let's say l = [your list of integers] and let's assume it's already sorted.
from itertools import combinations

def div(i,j,k): # this function has the logic
    return l[k]%l[j]==l[j]%l[i]==0

r = sum([div(i,j,k) for i,j,k in combinations(range(len(l)),3) if i<j<k])
@alaniwi provided a very smart iterative solution.
Here is a recursive solution.
def find_magicals(lst, nplet):
    """Find the number of magical n-plets in a given lst"""
    res = 0
    for i, base in enumerate(lst):
        # find all the multiples of current base
        multiples = [num for num in lst[i + 1:] if not num % base]
        res += len(multiples) if nplet <= 2 else find_magicals(multiples, nplet - 1)
    return res

def solution(lst):
    return find_magicals(lst, 3)
The problem can be divided into: for each number in the original list taken as the base (i.e., the x), how many du-plets can we find among the numbers bigger than it? Since the method for finding all du-plets is the same as for finding tri-plets, we can solve the problem recursively.
From my testing, this recursive solution is comparable to, if not more performant than, the iterative solution.
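For example:

print(solution([1, 2, 3, 4, 5, 6]))  # 3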
This answer was the first suggestion by @alaniwi and is the one I've found to be the fastest (at 0.59 seconds for a 2,000 integer list).
def solution(l):
    n = len(l)
    lower_counts = dict((val, 0) for val in l)
    upper_counts = lower_counts.copy()
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[lower] += 1
                upper_counts[upper] += 1
    return sum((lower_counts[y] * upper_counts[y] for y in l))
I think I've managed to get my head around it. What it is essentially doing is comparing each number in the list with every later (larger) number to see if the larger is divisible by the smaller, and building two dictionaries:
One with the number of times each number has a larger multiple of itself in the list,
One with the number of times it has a smaller divisor in the list.
You then multiply the two values for each key, because a key having 0 in either dictionary means it cannot be the second number (the y) in a triple.
Example:
l = [1,2,3,4,5,6]
lower_counts = {1:5, 2:2, 3:1, 4:0, 5:0, 6:0}
upper_counts = {1:0, 2:1, 3:1, 4:2, 5:1, 6:3}
triple_tuple = ([1,2,4], [1,2,6], [1,3,6])

max sum of list elements each separated by (at least) k elements

Given a list of numbers, to find the maximum sum of non-adjacent elements with time complexity O(n) and space complexity O(1), I could use this:
sum1 = 0
sum2 = list[0]
for i in range(1, len(list)):
    num = sum1
    sum1 = sum2 + list[i]
    sum2 = max(num, sum2)
print(max(sum2, sum1))
This code works only when k = 1 (exactly one element between the summed numbers). How could I improve it for other values of k using dynamic programming, where k is the number of elements between the summed numbers?
for example:
list = [5,6,4,1,2] k=1
answer = 11 # 5+4+2
list = [5,6,4,1,2] k=2
answer = 8 # 6+2
list = [5,3,4,10,2] k=1
answer = 15 # 5+10
It's possible to solve this with space O(k) and time O(nk). if k is a constant, this fits the requirements in your question.
The algorithm loops from position k + 1 to n. (If the array is shorter than that, it can obviously be solved in O(k)). At each step, it maintains an array best of length k + 1, such that the jth entry of best is the best solution found so far, such that the last element it used is at least j to the left of the current position.
Initializing best is done by setting, for its entry j, the largest non-negative entry in the array in positions 1, ..., k + 1 - j. So, for example, best[1] is the largest non-negative entry in positions 1, ..., k, and best[k + 1] is 0.
When at position i of the array, element i is either used or not. If it is used, the last previously-used element must be at least k + 1 to the left, so the relevant best until now is best[k + 1]; write u = max(best[k + 1] + a[i], best[1]) (the best[1] term covers not using element i). Then each "at least" part shifts by one, so for j = k + 1 down to 2, best[j] = max(best[j], best[j - 1]). Finally, set best[1] = u.
At the termination of the algorithm, the solution is the largest item in best.
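Here is a direct transcription of that description into Python (a sketch of my own, 0-indexed, with best kept 1-indexed via an unused slot 0; the function name is made up):

def max_sum_k_apart(a, k):
    n = len(a)
    # best[j], j = 1..k+1: best sum so far whose last used element
    # is at least j positions to the left of the current position i
    best = [0] * (k + 2)  # best[0] is an unused dummy slot
    for j in range(1, k + 2):
        best[j] = max([0] + a[:max(0, k - j + 1)])
    for i in range(k, n):
        u = max(best[k + 1] + a[i], best[1])  # use a[i], or don't
        for j in range(k + 1, 1, -1):  # shift each "at least" window by one
            best[j] = max(best[j], best[j - 1])
        best[1] = u
    return best[1]

print(max_sum_k_apart([5, 6, 4, 1, 2], 1))  # 11
print(max_sum_k_apart([5, 6, 4, 1, 2], 2))  # 8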
EDIT:
I had misunderstood the question. If you need to have at least k elements in between, then the following is an O(n^2) solution.
If the numbers are non-negative, then the DP recurrence relation is:
DP[i] = max(DP[j] + A[i]) for all j s.t. 0 <= j < i - k
      = A[i]              otherwise.
If there are negative numbers in the array as well, then we can use the idea from Kadane's algorithm:
DP[i] = max(DP[j] + A[i]) for all j s.t. 0 <= j < i - k and DP[j] + A[i] > 0
      = max(0, A[i])      otherwise.
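A short sketch of that recurrence (my code; it merges the two cases by clamping at 0, matching the Kadane-style variant, so an empty selection scores 0):

def max_sum_dp(A, k):
    n = len(A)
    DP = [0] * n
    for i in range(n):
        DP[i] = max(0, A[i])  # base case: start a fresh sum at i
        for j in range(i - k):  # all j with at least k elements between j and i
            DP[i] = max(DP[i], DP[j] + A[i])
    return max(DP) if n else 0

print(max_sum_dp([5, 6, 4, 1, 2], 2))  # 8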
Here's a quick implementation of the algorithm described by Ami Tavory (as far as I understand it). It should work for any sequence, though if your list is all negative, the maximum sum will be 0 (the sum of an empty subsequence).
import collections

def max_sum_separated_by_k(iterable, k):
    best = collections.deque([0]*(k+1), k+1)
    for item in iterable:
        best.appendleft(max(item + best[-1], best[0]))
    return best[0]
This uses O(k) space and O(N) time. All of the deque operations, including appending a value to one end (and implicitly removing one from the other end so the length limit is maintained) and reading from the ends, are O(1).
If you want the algorithm to return the maximum subsequence (rather than only its sum), you can change the initialization of the deque to start with empty lists rather than 0, and then append max([item] + best[-1], best[0], key=sum) in the body of the loop. That will be quite a bit less efficient though, since it adds O(N) operations all over the place.
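A sketch of that variant (my code, following the suggestion above literally; since [item] + best[-1] prepends, the returned subsequence comes out newest-first):

import collections

def max_subseq_separated_by_k(iterable, k):
    # the deque holds candidate subsequences (lists) instead of sums;
    # the initial shared empty lists are never mutated, only replaced
    best = collections.deque([[]] * (k + 1), k + 1)
    for item in iterable:
        best.appendleft(max([item] + best[-1], best[0], key=sum))
    return best[0]

print(max_subseq_separated_by_k([5, 6, 4, 1, 2], 1))  # [2, 4, 5], sum 11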
Not sure about the complexity, but coding efficiency landed me with
max([sum(l[i::j]) for j in range(k + 1, len(l) + 1) for i in range(len(l))])
(I've replaced the list variable by l so as not to step on a keyword). Note that a slice step of j leaves j - 1 elements between the summed numbers, so j has to start at k + 1; also, this only tries evenly spaced subsequences, so it can miss optima whose picks have irregular gaps.

Python 2 lists of positive integers finding prime number

Given 2 lists of positive integers, find how many ways you can select a number from each of the lists such that their sum is a prime number.
My code is too slow, as both list1 and list2 contain 50000 numbers each. Is there any way to make it faster, so it solves the problem in minutes instead of days? :)
def isprime(n):
    # 2 is the only even prime number
    if n == 2: return True
    # all other even numbers are not primes
    if not n & 1: return False
    # range starts with 3 and only needs to go
    # up to the square root of n, for all odd numbers
    for x in range(3, int(n**0.5)+1, 2):
        if n % x == 0: return False
    return True
n = 0  # number of ways (initialization implied by the counting below)
for i2 in l2:
    for i1 in l1:
        if isprime(i1 + i2):
            n = n + 1  # increasing number of ways
            s = "{0:02d}: {1:d}".format(n, i1 + i2)
            print(s)  # printing out
Sketch:
Following @Steve's advice, first figure out all the primes <= max(l1) + max(l2). Let's call that list primes. Note: primes doesn't really need to be a list; you could instead generate the primes up to the max one at a time.
Swap your lists (if necessary) so that l2 is the longest list. Then turn that into a set: l2 = set(l2).
Sort l1 (l1.sort()).
Then:
for p in primes:
    for i in l1:
        diff = p - i
        if diff < 0:
            # assuming there are no negative numbers in l2;
            # since l1 is sorted, all diffs at and beyond this
            # point will be negative
            break
        if diff in l2:
            # print whatever you like
            # at this point, p is a prime, and is the
            # sum of diff (from l2) and i (from l1)
            pass
Alas, if l2 is, for example:
l2 = [2, 3, 100000000000000000000000000000000000000000000000000]
this is impractical. It relies on the fact that, as in your example, max(max(l1), max(l2)) is "reasonably small".
Fleshed out
Hmm! You said in a comment that the numbers in the lists are up to 5 digits long. So they're less than 100,000. And you said at the start that the list have 50,000 elements each. So they each contain about half of all possible integers under 100,000, and you're going to have a very large number of sums that are primes. That's all important if you want to micro-optimize ;-)
Anyway, since the maximum possible sum is less than 200,000, any way of sieving will be fast enough - it will be a trivial part of the runtime. Here's the rest of the code:
def primesum(xs, ys):
    if len(xs) > len(ys):
        xs, ys = ys, xs
    # Now xs is the shorter list.
    xs = sorted(xs)  # don't mutate the input list
    sum_limit = xs[-1] + max(ys)  # largest possible sum
    ys = set(ys)  # make lookups fast
    count = 0
    for p in gen_primes_through(sum_limit):
        for x in xs:
            diff = p - x
            if diff < 0:
                # Since xs is sorted, all diffs at and
                # beyond this point are negative too.
                # Since ys contains no negative integers,
                # no point continuing with this p.
                break
            if diff in ys:
                #print("%s + %s = prime %s" % (x, diff, p))
                count += 1
    return count
I'm not going to supply my gen_primes_through(), because it's irrelevant. Pick one from the other answers, or write your own.
Here's a convenient way to supply test cases:
from random import sample
xs = sample(range(100000), 50000)
ys = sample(range(100000), 50000)
print(primesum(xs, ys))
Note: I'm using Python 3. If you're using Python 2, use xrange() instead of range().
Across two runs, they each took about 3.5 minutes. That's what you asked for at the start ("minutes instead of days"). Python 2 would probably be faster. The counts returned were:
219,334,097
and
219,457,533
The total number of possible sums is, of course, 50000**2 == 2,500,000,000.
About timing
All the methods discussed here, including your original one, take time proportional to the product of two lists' lengths. All the fiddling is to reduce the constant factor. Here's a huge improvement over your original:
def primesum2(xs, ys):
    sum_limit = max(xs) + max(ys)  # largest possible sum
    count = 0
    primes = set(gen_primes_through(sum_limit))
    for i in xs:
        for j in ys:
            if i+j in primes:
                # print("%s + %s = prime %s" % (i, j, i+j))
                count += 1
    return count
Perhaps you'll understand that one better. Why is it a huge improvement? Because it replaces your expensive isprime(n) function with a blazing fast set lookup. It still takes time proportional to len(xs) * len(ys), but the "constant of proportionality" is slashed by replacing a very expensive inner-loop operation with a very cheap operation.
And, in fact, primesum2() is faster than my primesum() in many cases too. What makes primesum() faster in your specific case is that there are only around 18,000 primes less than 200,000. So iterating over the primes (as primesum() does) goes a lot faster than iterating over a list with 50,000 elements.
A "fast" general-purpose function for this problem would need to pick different methods depending on the inputs.
You should use the Sieve of Eratosthenes to calculate prime numbers.
You are also calculating the prime numbers for each possible combination of sums. Instead, consider finding the maximum value you can achieve with the sum from the lists. Generate a list of all the prime numbers up to that maximum value.
Whilst you are adding up the numbers, you can see if the number appears in your prime number list or not.
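A minimal sketch of that idea (my code; it reuses the eras() sieve from the answer below):

def count_prime_sums(l1, l2):
    limit = max(l1) + max(l2)  # largest possible sum
    primes = set(eras(limit))  # all primes up to the largest sum
    return sum((x + y) in primes for x in l1 for y in l2)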
I would find the highest number in each list; the primes then need to be sieved up to the sum of those two highest numbers.
Here is code to sieve out primes:
def eras(n):
    last = n + 1
    sieve = [0, 0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i * i:last:i] = [0] * (n // i - i + 1)
    return filter(None, sieve)
It takes around 3 seconds to find the primes up to 10,000,000. Then I would use the same n^2 algorithm you are using for generating the sums. I think there is an n log n algorithm but I can't come up with it.
It would look something like this:
from collections import defaultdict

possible = defaultdict(int)
for x in range1:
    for y in range2:
        possible[x + y] += 1

def eras(n):
    last = n + 1
    sieve = [0, 0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i * i:last:i] = [0] * (n // i - i + 1)
    return filter(None, sieve)

n = max(possible.keys())
primes = eras(n)
possible_primes = set(possible.keys()).intersection(set(primes))

for p in possible_primes:
    print "{0}: {1} possible ways".format(p, possible[p])
