Factorial running time - python

When analyzing some code I've written, I've come up with the following recursive equation for its running time -
T(n) = n*T(n-1) + n! + O(n^2).
Initially, I assumed that O((n+1)!) = O(n!), and therefore I solved the equation like this -
T(n) = n! + O(n!) + O(n^3) = O(n!)
My reasoning was that even if every level of the recursion contributed another n! (instead of (n-1)!, (n-2)!, etc.), the total would still only come to n*n! <= (n+1)!, which I assumed was O(n!). The O(n^3) term comes from summing the O(n^2) work over the n levels of recursion (sum of squares).
But, after thinking about it some more, I'm not sure my assumption that O((n+1)!) = O(n!) is correct, in fact, I'm pretty sure it isn't.
If I am right in thinking I made a wrong assumption, I'm not really sure how to actually solve the above recursive equation, since there is no formula for the sum of factorials...
Any guidance would be much appreciated.
Thank you!!!

Since you're looking at run time, I assume the O(n^2) term is meant to count operations. Under that assumption, n! can be computed in O(n) time (1*2*3*...*n), so it can be dropped in comparison to the O(n^2) term. T(n-1) is then computed in approximately O((n-1)^2) time, which is roughly O(n^2). Putting it all together, you have something which runs in
O(n^2) + O(n) + O(n^2)
resulting in an O(n^2) algorithm.
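For what it's worth, here is a minimal sketch of the O(n)-multiplication factorial that reasoning refers to (treating each multiplication as constant time, which stops being true once the intermediate products get very large):

def factorial_linear(n):
    # n - 1 multiplications: 1*2*3*...*n
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result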

I figured it out.
T(n) = n*T(n-1) + n! + O(n^2)
     = n*T(n-1) + n!
     = n*( (n-1)*T(n-2) + (n-1)! ) + n!
     = n*(n-1)*T(n-2) + 2*n!
     = ...
     = n!*T(0) + n*n!
     = O(n*n!)
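A quick numerical sanity check of this expansion (only a sketch: it assumes the O(n^2) term is exactly n^2 and that the base case is T(0) = 1):

from math import factorial

def T(n):
    # literal reading of the recurrence above
    if n == 0:
        return 1
    return n * T(n - 1) + factorial(n) + n ** 2

for n in (10, 50, 100, 200):
    print(n, T(n) / (n * factorial(n)))   # the ratio drifts toward 1, consistent with T(n) = O(n*n!)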

The problem with:
T(n) = n*T(n-1) + n! + O(n^2)
Is that you're mixing two different types of terms. Everything left of the final + refers to a number; to the right of that plus is O(n^2) which denotes the class of all functions which grow asymptotically no faster than n^2.
Assuming you mean:
T(n) = n*T(n-1) + n! + n^2
Then T(n) is in O(n!) because n! is the fastest growing term in the sum. (Actually, I'm not sure that n*T(n-1) isn't faster growing - my combinatorics isn't that strong.)
Expanding out the recursive term, the recursive "call" n*T(n-1) reduces to some function which is O(n!), and so the function as a whole is O(n!).
Fully expanded, however, the recursive term turns out to be the fastest growing one. See the comments for various suggestions for the correct expansion.

From what I understand from the source code:
https://github.com/python/cpython/blob/main/Modules/mathmodule.c#L1982-L2032
it must be at most O(n) if not faster.

3 Recursions to Recurrence T(n) and Master Theorem

Suppose I have the code below, and my task is to find the recurrence T(n) and its worst-case runtime. The value of n is the length of the list.
In this case, we have 3 recursions: mystery(mylist[:len(mylist)-t-1]), mystery(mylist[t:len(mylist)-1]) and mystery(mylist[:len(mylist)-t-1]).
def mystery(mylist):
    if len(mylist) <= 1:
        return
    if len(mylist) >= 3:
        t = len(mylist) // 3
        mystery(mylist[:len(mylist)-t-1])
        mystery(mylist[t:len(mylist)-1])
        mystery(mylist[:len(mylist)-t-1])
For the recursive case, my observation is that because the recursive calls happen one after another, the recurrence is:
T(n) = T(floor(2n/3)) + T(floor(n/3)) + T(floor(2n/3)) = 2T(floor(2n/3)) + T(floor(n/3))
Now here is the hard part, to figure out f(n), so I expanded the recursive T(n) and I got more and more T(n)s. How would I be able to figure out f(n)?
For the base case, T(0) and T(1) are 1 because of the first if-statement and T(2) = 0 because there is no if-statement for n=2.
Are my assessments correct?
Thank you!
You are right about the base cases. You could even group T(2) in there. It's still O(1) as you're at least evaluating the two conditional statements, and practically speaking there aren't any O(0) function calls.
The f(n) term in your recurrence is just an expression of all the work you do in the recursive case outside of generating recursive calls. Here you have the O(1) t = len(mylist) // 3 statement and the O(1) cost of evaluating the two conditional statements: O(1) work in total. However, you also have the O(n) cost of slicing your list into three parts to pass into the recursive calls. This gives f(n) = O(n) + O(1) = O(n). From this, we can express the overall recurrence as:
T(n) = 2T(2n/3) + T(n/3) + O(n) if n >=3
T(n) = 1 otherwise
However, the Master Theorem doesn't apply to this case because you have recursive calls which work on different sub-problem sizes: you can't isolate a single a and b value to apply the Master Theorem. For a recurrence like this, you could apply the generalization of the Master Theorem known as the Akra-Bazzi method, with the parameters being:
a1=2, a2=1
b1=2/3, b2=1/3
g(n) = n
h1(n) = h2(n) = 0
Following the method, solve 2*(2/3)^p + (1/3)^p = 1 for p, then evaluate the integral
T(n) = Theta( n^p * (1 + integral from 1 to n of g(u)/u^(p+1) du) )
with g(u) = u (as g(n) = n) to determine the complexity class.
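If it helps, here is a rough numerical sketch of those two steps (assuming the recurrence is exactly T(n) = 2T(2n/3) + T(n/3) + n):

def akra_bazzi_p(lo=0.0, hi=10.0, iters=100):
    # Solve 2*(2/3)**p + (1/3)**p = 1 for p by bisection; the left side is decreasing in p.
    f = lambda p: 2 * (2 / 3) ** p + (1 / 3) ** p - 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def akra_bazzi_integral(n, p, steps=200000):
    # Midpoint-rule approximation of the integral from 1 to n of g(u)/u**(p+1) du, with g(u) = u.
    total = 0.0
    du = (n - 1) / steps
    for i in range(steps):
        u = 1 + (i + 0.5) * du
        total += (u / u ** (p + 1)) * du
    return total

p = akra_bazzi_p()
print(p)                               # -> 2.0
print(akra_bazzi_integral(1000, p))    # -> about 1, i.e. bounded, so T(n) = Theta(n^2)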
If you don't need the exact complexity class, but only want to derive a more simple upper-bound with the Master Theorem, you could upper-bound your recurrence relation for the run-time using the fact that 3T(2n/3) >= 2T(2n/3) + T(n/3):
T(n) <= 3T(2n/3) + O(n) if n >=3
T(n) = 1 otherwise
Then, you can solve this upper bound on the time-complexity with the Master Theorem, with a=3, b=3/2, and f(n)= n^c = n^1 to derive a Big-O (rather than Big-Theta) complexity class.
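For reference, the exponent that this upper bound produces is n^(log_{3/2} 3); a quick check of the value, nothing more:

import math
print(math.log(3, 1.5))   # ~2.7095, so this route gives T(n) = O(n^2.71), a looser bound than Akra-Bazzi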

Time complexity of python function

I am trying to work out the time complexity of this function (I'm still new to solving complexity problems) and was wondering what it would be:
def mystery(lis):
    n = len(lis)
    for index in range(n):
        x = 2*index % n
        lis[index], lis[x] = lis[x], lis[index]
    print(lis)
I believe the answer is O(n) but I am not 100% sure as the line: x = 2*index % n is making me wonder if it is maybe O(n log n).
Multiplying two operands together is usually considered a constant-time operation in time-complexity analysis. The same goes for %.
The fact that you have n as one of the operands doesn't make it O(n), because n is a single number. To make it O(n) you would need to perform an operation n times.
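One way to convince yourself is to count operations instead of guessing (a sketch; the constant 4 per iteration is just an illustrative tally):

def mystery_count(lis):
    # Same loop as mystery(), instrumented with a primitive-operation counter.
    ops = 0
    n = len(lis)
    for index in range(n):
        x = 2 * index % n
        lis[index], lis[x] = lis[x], lis[index]
        ops += 4   # *, %, and the two assignments of the swap
    return ops

for size in (10, 100, 1000):
    print(size, mystery_count(list(range(size))))   # 40, 400, 4000: linear in the input length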

Are the Big-O notations for the following programs correct?

I have the following three programs and I have calculated the Big-O time complexity for each of them. I just want to make sure that I have done it right.
import random
def A(N):
    L = []
    for i in range(0, 26):
        L.append(chr(ord('A') + i))
    Alpha = []
    i = 0
    while i < N:
        flag = 0
        x = random.randint(0, N-1)
        for j in range(0, i):
            if Alpha[j] == L[x]:
                flag = 1
                break
        if flag == 0:
            Alpha.append(L[x])
            i = i + 1
    return Alpha
Complexity for A(N) is [O(1) + O(n) + O(n)] -> O(n^2)
def A2(N):
    L = []
    x = ord('A')
    for i in range(0, 26):
        L.append(chr(x + i))
    Alpha = []
    i = 0
    Ran = [0]*N
    while i < N:
        x = random.randint(0, N-1)
        if Ran[x] == 0:
            Alpha.append(L[x])
            i = i + 1
            Ran[x] = 1
    return Alpha
Complexity for A2(N) is [O(1) + O(n)] -> O(n)
def A3(N):
    L = []
    x = ord('A')
    for i in range(0, 26):
        L.append(chr(x + i))
    Alpha = []
    for i in range(0, N):
        Alpha.append(L[i])
    for i in range(2, N):
        x = random.randint(0, i)
        temp = Alpha[i]
        Alpha[i] = Alpha[x]
        Alpha[x] = temp
    return Alpha
Complexity for A3(N) is [O(1) + O(n) + O(n)] -> O(n^2)
In the first example, the complexity is not
[O(1) + O(n) + O(n)] -> O(n^2)
but it's [O(1) + sum from i=0 to n-1 of O(n)] = sum from i=0 to n-1 of O(n) = O(n^2)
In practice, you are executing an O(n) task a maximum of n times - therefore, O(n) times. That's why this is, effectively, a multiplication.
In the second example, you would be correct if the execution of the loop didn't rely on random numbers - thanks to Nick Vitha for pointing that out, see his answer - but, unfortunately, it does.
Algorithmic time complexity is usually applied to deterministic algorithms, and here we're talking about a probabilistic algorithm, which is modeled in a different way.
In this case, it's not trivial to get the complexity class, as it depends on the implementation of the random function and on its distribution.
Your randomized algorithm has neither a guarantee of success nor a bounded run time, so its worst-case complexity class is not well defined, and any statement about it would have to be proven stochastically. The expected value of the running time, on the other hand, is easier to calculate, given the distribution of the random function.
In the third example, it is indeed:
[O(1) + O(n) + O(n)]
but when you add similar classes together you get the same class of time complexity - this is because you would get a multiple of the initial complexity, which in the asymptotic context is equal to it.
So the solution would be:
[O(1) + O(n) + O(n)] -> O(n)
I apologize if, mathematically speaking, my notation is not precise, but I believe that it summarizes the concept enough.
1.
Complexity for A(N) is [O(1) + O(n) + O(n)] -> O(n^2)
This is not correct. Well, the O(n^2) part is correct, but how you got to it is untrue.
[O(1) + O(n) + O(n)] -> O(2n + 1) -> O(n)
However, your code is:
[O(k) + O(n)*O(n)] -> O(n^2 + k) -> O(n^2)
where k is a constant (in this case 26, but it doesn't matter as long as it's not influenced by n). You multiply things that are nested like that. You can simplify O(k) to O(1) if you want. Either way it goes away.
2.
Complexity for A2(N) is [O(1) + O(n)] -> O(n)
Oh god. I'm not even sure where to begin.
So basically, you're accessing a random part of an array of length N. And you're checking if it's 0. If it is, you do some stuff and assign it to 1.
I'm inclined to believe that the answer to this will be significantly higher than O(n) on average. Someone who's had more coffee and/or math experience will probably need to chime in on that, but you're going to loop through AT LEAST n times, and AT WORST this loop is infinite, because you're doing random access and could just keep randomly hitting an index that is already 1. Usually, you do O() notation using the WORST case, so this loop is
O(infinity) -> undefined
3.
Complexity for A3(N) is [O(1) + O(n) + O(n)] -> O(n^2)
As said before, it is [O(1) + O(n) + O(n)], but that yields O(n)
For A2, I agree with Nick that it is unbounded (worst case). Your average case is n * log(n). Your O(n) is actually the best case.
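To see that average case empirically, here is a small simulation (a sketch: it models only A2's random-filling loop and counts loop iterations, not individual operations):

import math
import random

def a2_iterations(N):
    # Count how many times A2's while-loop body runs before every slot is filled.
    seen = [False] * N
    filled = 0
    iterations = 0
    while filled < N:
        iterations += 1
        x = random.randint(0, N - 1)
        if not seen[x]:
            seen[x] = True
            filled += 1
    return iterations

for N in (10, 100, 1000):
    avg = sum(a2_iterations(N) for _ in range(100)) / 100
    expected = N * sum(1 / k for k in range(1, N + 1))   # coupon-collector expectation N * H_N
    print(N, round(avg), round(expected))                # the average tracks N * H_N, i.e. about n log n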

How do I identify O(nlogn) exactly?

I understand O(log n) in the sense that it increases quickly at first, but with larger inputs the rate of increase slows down.
I am not able to completely understand O(n log n), or the difference between an algorithm with complexity n log n and one with complexity n + log n.
I could use a modification of the phone book example and/or some basic Python code to understand the two cases.
How do you think of O(n ^ 2)?
Personally, I like to think of it as doing O(n) work O(n) times.
A contrived O(n ^ 2) algorithm would be to iterate through all pairs of numbers in 0, 1, ..., n - 1
def print_pairs(n):
    for i in range(n):
        for j in range(i + 1, n):
            print('({},{})'.format(i, j))
Using similar logic as above, you could do O(log n) work O(n) times and have a time complexity of O(n log n).
As an example, we are going to use binary search to find all indices of elements in an array.
Yes, I understand this is a dumb example but here I don't want to focus on the usefulness of the algorithm but rather the complexity. For the sake of the correctness of our algorithm let us assume that the input array is sorted. Otherwise, our binary search does not work as intended and could possibly run indefinitely.
def find_indices(arr):
    indices = []
    for num in arr:
        index = binary_search(arr, 0, len(arr) - 1, num)
        indices.append(index)
    return indices

def binary_search(arr, l, r, x):
    # Check base case
    if r >= l:
        mid = l + (r - l) // 2
        # If element is present at the middle itself
        if arr[mid] == x:
            return mid
        # If element is smaller than mid, then it
        # can only be present in left subarray
        elif arr[mid] > x:
            return binary_search(arr, l, mid - 1, x)
        # Else the element can only be present
        # in right subarray
        else:
            return binary_search(arr, mid + 1, r, x)
    else:
        # Element is not present in the array
        return -1
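A short usage example (assuming the corrected code above, and a sorted input as required):

print(find_indices([1, 3, 5, 7, 9]))   # -> [0, 1, 2, 3, 4]
# n binary searches over a list of length n: O(log n) work done O(n) times, i.e. O(n log n)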
As for your second question,
surely, log n << n as n tends to infinity so
O(n + log n) = O(n)
In theory, the log n is dwarfed by the n as we get arbitrarily large so we don't include it in our Big O analysis.
In practice, though, you might want to consider this extra log n work if your algorithm is suffering from performance and/or scaling issues.
log n is a much slower growing function than n. When computer scientists speak of big-O, they are interested in the growth of the function for extremely large input values. What the function does near some small number or inflection point is immaterial.
Many common algorithms have time complexity of n log n. For example, merge sort requires n steps to be taken log_2(n) times as the input data is split in half. After studying the algorithm, the fact that its complexity is n log n may come to you by intuition, but you could arrive at the same conclusion by studying the recurrence relation that describes the (recursive) algorithm--in this case T(n) = 2 * T(n / 2) + n. More generally but perhaps least intuitively, the master theorem can be applied to arrive at this n log n expression. In short, don't feel intimidated if it isn't immediately obvious why certain algorithms have certain running times--there are many ways you can take to approach the analysis.
Regarding "complexity n + log n", this isn't how big-O notation tends to get used. You may have an algorithm that does n + log n work, but instead of calling that O(n + log n), we'd call that O(n) because n grows so much faster than log n that the log n term is negligible. The point of big-O is to state only the growth rate of the fastest growing term.
Compared with n log n, a log n algorithm is less complex. If log n is the time complexity of inserting an item into a self-balancing search tree, n log n would be the complexity of inserting n items into such a structure.
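As a concrete sketch of that "n insertions, each costing O(log n)" pattern (using a binary heap here rather than a self-balancing search tree, but the accounting is the same):

import heapq

def build_one_by_one(items):
    heap = []
    for x in items:               # n insertions...
        heapq.heappush(heap, x)   # ...each taking O(log n)
    return heap                   # total: O(n log n)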
The book Grokking Algorithms explains how to work out algorithm complexity (among other things) thoroughly and in very simple language.
Technically, algorithms with complexity O(n + log n) and complexity O(n) are the same, as the log n term becomes negligible when n grows.
O(n) grows linearly. The slope is constant.
O(n log n) grows super-linearly. The slope increases (slowly).

What's the time complexity for the following python function?

def func(n):
    if n == 1:
        return 1
    return func(n-1) + n*(n-1)

print(func(5))
Getting confused. Not sure what exactly it is. Is it O(n)?
Calculating n*(n-1) is a fixed-time operation. The interesting part of the function is calling func(n-1) until n is 1. The function will make n such calls, so its complexity is O(n).
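A tiny instrumentation sketch of that claim (func_counted is a hypothetical helper, added only to count calls):

calls = 0

def func_counted(n):
    # Same recursion as func(), but counts how many times it is entered.
    global calls
    calls += 1
    if n == 1:
        return 1
    return func_counted(n - 1) + n * (n - 1)

func_counted(50)
print(calls)   # 50: one call per value of n, so the number of calls is linear in n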
If we assume that arithmetic operations are constant time operations (and they really are when numbers are relatively small) then time complexity is O(n):
T(n) = T(n-1) + C = T(n-2) + C + C = ... = n * C = O(n)
But in practice the cost of multiplication depends on the underlying type (and we are talking about Python, where the type depends on the value), and as N approaches infinity it can no longer be treated as constant. Thus, strictly speaking, the complexity is equal to:
T(n) = O(n * multComplexity(n))
And this multComplexity(n) depends on a specific algorithm that is used for multiplication of huge numbers.
As described in other answers, the answer is close to O(n) for practical purposes. For a more precise analysis, if you don't want to make the approximation that multiplication is constant-time:
Calculating n*(n-1) takes O(log n * log n) (or O((log n)^1.58), depending on the multiplication algorithm Python uses, which in turn depends on the size of the integer). Note that we take the log because the cost is measured in the number of digits.
Adding the two terms takes O(log n), so we can ignore that.
The multiplication gets done O(n) times, so the total is O(n * log n * log n). (It might be possible to get this bound tighter, but it's certainly larger than O(n) - see the WolframAlpha plot).
In practice, the log terms won't really matter unless n gets very large.
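If you want to see that big-integer multiplication really isn't constant time, here is a rough timing sketch (numbers will vary by machine and Python version):

import timeit

for digits in (1000, 10000, 100000):
    x = 10 ** digits   # an integer with roughly `digits` digits
    t = timeit.timeit(lambda: x * x, number=100)
    print(digits, t)   # the time per multiplication grows with the size of the operands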
