What is the Big O complexity of these two algorithms?
def foo1(n):
    if n > 1:
        for i in range(int(n)):
            foo1(1)
        foo1(n / 2)
def foo2(lst1, lst2):
    i = 1
    while i < max(len(lst1), len(lst2)):
        j = 1
        while j < min(len(lst1), len(lst2)):
            j *= 2
        i *= 2
I thought that foo1's running time is O(n), because looking at the for loop I can write:
T(n) = O(n) + O(n/2) <= c*O(n) (c is a constant) for all n.
Is that right?
I also can't work out the running time of foo2. Can someone help me figure out how to do that?
Thanks...
The number of operations T(n) satisfies T(n) = T(n/2) + n. Applying the Master theorem we get T(n) = O(n). In simple terms, there are n + n/2 + n/4 + ... + 1 operations in total, which is less than 2*n and therefore O(n).
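If you want to see this empirically, here is a small counting sketch (foo1_count is just an illustrative helper that mirrors foo1 but counts loop iterations instead of calling foo1(1)):

def foo1_count(n):
    # mirrors foo1, but returns the total number of loop iterations
    if n <= 1:
        return 0
    return int(n) + foo1_count(n / 2)

for n in [100, 1000, 10000]:
    print(n, foo1_count(n))  # the count stays below 2*n, consistent with O(n)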
The inner loop does not depend on the outer loop, so we can treat them independently. T(n) = O(log(maxlen) * log(minlen)).
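Again, a quick empirical check (foo2_count is a hypothetical variant of foo2 that just counts inner-loop iterations):

def foo2_count(lst1, lst2):
    # same loops as foo2, but counts how often the inner body runs
    count = 0
    i = 1
    while i < max(len(lst1), len(lst2)):
        j = 1
        while j < min(len(lst1), len(lst2)):
            count += 1
            j *= 2
        i *= 2
    return count

# roughly log2(maxlen) * log2(minlen) iterations: log2(1024) * log2(16) = 10 * 4
print(foo2_count(list(range(1024)), list(range(16))))  # prints 40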
I'm having some trouble figuring out the count of primitive operations for the following lines of code:
def question1(n):
    n = n                              # 1 op
    i = 0                              # 1 op
    a = 0                              # 1 op
    while i < n:                       # n ops
        j = 0                          # n ops
        while j < n:                   # n * n ops
            k = 0                      # n * n ops
            while k < 60:              # n * n * 60 ops
                a = i * j - i * 2 + k  # n * n * 60 * 5 ops
                k += 1                 # n * n * 60 * 2 ops
            j += 1                     # n * n ops
        i += 1                         # n ops

# total sum of primitive operations = (n * n * 483) + (3 * n) + 3
I'm not sure about this part:
while k < 60:              # n * n * 60 ops
    a = i * j - i * 2 + k  # n * n * 60 * 5 ops
    k += 1                 # n * n * 60 * 2 ops
Is it really
n * n * 60?
or should it be
n * n * n * 60
"primitive operations" is an ambiguous concept. For instance, a while statement will at some point evaluate the condition as false (which you didn't count) and then make the execution jump to the statement after the loop. One could say those are two operations (evaluation + jump).
Someone could say that k += 1 should count as 3 operations:
load the value of k into a CPU register,
add one to it
store that register's value back in k.
But if Python were compiled into a machine language that has the INC instruction (like NASM), and we deal with fixed-size integers (like 32 bit), it is only one operation.
So this concept is fuzzy, and it is quite useless to sum them up. You should not identify "primitive operations", but identify chunks of code that have a constant time complexity.
Analysis in terms of constant time complexity
First we need to decide whether to think of integer arithmetic operations to be constant in time, or whether we should take into consideration that integers (certainly in Python) can have arbitrary size, and therefore these operations do not have a constant time complexity. See also "bit complexity". I will assume here that you want to regard arithmetic operations as having a constant time complexity.
Then we can identify this chunk of code as having a constant time complexity:
k = 0
while k < 60:
    a = i * j - i * 2 + k
    k += 1
j += 1
Note here that executing the inner block (which has constant complexity) 60 times still means the total has a constant time complexity, since 60 is a constant (independent of the input).
Also the initialisation of integer variables or their incrementing all represent constant time complexity.
There are two nested loops that each iterate 𝑛 times when they get executed. So that means the (above) inner part gets executed 𝑛² times.
Hence, the overall time complexity is O(𝑛²).
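If it helps, a small counting sketch (count_inner is just an illustrative helper mirroring question1's loops) shows that the innermost statement runs exactly 60 * n * n times:

def count_inner(n):
    # counts how many times the innermost statement of question1 executes
    count = 0
    i = 0
    while i < n:
        j = 0
        while j < n:
            k = 0
            while k < 60:
                count += 1
                k += 1
            j += 1
        i += 1
    return count

for n in [10, 100]:
    print(n, count_inner(n), 60 * n * n)  # the two counts match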
I would say it is definitely not n * n * n * 60. You might be confused about asymptotic notation, which may be influencing this question in the first place. The third while loop is executed 60 times, meaning that each operation within it is executed 60 times. This while loop runs 60 times for each of the n iterations of the second while loop, which runs n times for each of the n iterations of the first while loop, yielding n * n * 60.
Although the 60 is involved here, it is still a constant and is therefore of little significance for large values of n. The triple nested loop is more of a trick question in this case, designed to show why the polynomial behaviour of the algorithm matters more than any constants: as n gets large, n * n becomes much larger than 60 does.
Your calculation looks correct, though. The only thing you missed is that the following lines are actually 2 operations each:
j += 1 # n * n * 2 ops, equal to j = j + 1, an assignment AND addition
i += 1 # n * 2 ops ( i = i + 1 )
Suppose I have the code below, and my task is to find the recurrence T(n) and its worst-case runtime. The value of n is the length of the list.
In this case, we have 3 recursive calls: mystery(mylist[:len(mylist)-t-1]), mystery(mylist[t:len(mylist)-1]), and mystery(mylist[:len(mylist)-t-1]).
def mystery(mylist):
    if len(mylist) <= 1:  # the original wrote len(L), presumably meaning the same list
        return
    if len(mylist) >= 3:
        t = len(mylist) // 3
        mystery(mylist[:len(mylist)-t-1])
        mystery(mylist[t:len(mylist)-1])
        mystery(mylist[:len(mylist)-t-1])
For the recursive case, my observation is that because the recursive calls come one after another, the recurrence is:
T(n) = T(floor(2n/3)) + T(floor(n/3)) + T(floor(2n/3)) = 2T(floor(2n/3)) + T(floor(n/3))
Now here is the hard part: figuring out f(n). I expanded the recurrence and just got more and more T(n) terms. How can I figure out f(n)?
For the base case, T(0) and T(1) are 1 because of the first if-statement and T(2) = 0 because there is no if-statement for n=2.
Are my assessments correct?
Thank you!
You are right about the base cases. You could even group T(2) in there. It's still O(1) as you're at least evaluating the two conditional statements, and practically speaking there aren't any O(0) function calls.
The f(n) term in your recurrence is just an expression of all the work you do in the recursive case outside of generating recursive calls. Here you have the O(1) t = len(mylist) // 3 statement and the O(1) cost of evaluating the two conditional statements: O(1) work in total. However, you also have the O(n) cost of slicing your list into three parts to pass into the recursive calls. This gives f(n) = O(n) + O(1) = O(n). From this, we can express the overall recurrence as:
T(n) = 2T(2n/3) + T(n/3) + O(n) if n >=3
T(n) = 1 otherwise
However, the Master Theorem doesn't apply to this case because you have recursive calls which work on different sub-problem sizes: you can't isolate a single a or b value to apply the Master Theorem. For a recurrence like this, you could apply the generalization of the Master Theorem known as the Akra-Bazzi Method, with the parameters being:
a1=2, a2=1
b1=2/3, b2=1/3
g(n) = n
h1(n) = h2(n) = 0
Following the method, solve 2(2/3)^p + (1/3)^p = 1 for p, then evaluate the Akra-Bazzi integral with g(u) = u (since g(n) = n) to determine the complexity class.
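For reference, the Akra-Bazzi bound with these parameters is:
T(n) = Θ( n^p * (1 + integral from 1 to n of g(u) / u^(p+1) du) )
Here p = 2 solves the equation exactly, since 2*(2/3)^2 + (1/3)^2 = 8/9 + 1/9 = 1, and the integral of u / u^3 from 1 to n is 1 - 1/n, which is bounded by a constant, so the method gives T(n) = Θ(n^2).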
If you don't need the exact complexity class, but only want to derive a more simple upper-bound with the Master Theorem, you could upper-bound your recurrence relation for the run-time using the fact that 3T(2n/3) >= 2T(2n/3) + T(n/3):
T(n) <= 3T(2n/3) + O(n) if n >=3
T(n) = 1 otherwise
Then, you can solve this upper bound on the time-complexity with the Master Theorem, with a=3, b=3/2, and f(n)= n^c = n^1 to derive a Big-O (rather than Big-Theta) complexity class.
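Concretely, a = 3 and b = 3/2 give log_b(a) = log_1.5(3) ≈ 2.71, so c = 1 < log_b(a), case 1 of the Master Theorem applies, and the upper bound comes out as T(n) = O(n^(log_1.5 3)) ≈ O(n^2.71), which is looser than the exact Θ(n^2) from Akra-Bazzi.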
I need to prove that t(n) is O(n!), given t(n) = (n!)(n-1).
This is the formula I'm working with; any suggestions?
(n!)(n-1) <= c(n!)
I'm having a hard time proving this. Would this formula work instead?
(n!)(n-1) <= c(n * n!)
It isn't O(n!). You have the right equation that would need to be true if n!(n-1) = O(n!):
n!(n-1) <= cn!
But then dividing both sides by n! gives:
n-1 <= c
There's no constant c that's greater than all positive integers, so you have a contradiction.
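(For example, for any constant c you might pick, taking n = c + 2 already gives n - 1 = c + 1 > c.)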
Does the following algorithm have a complexity of O(nlogn)?
The thing that confuses me is that this algorithm divides twice, not once as a regular O(nlogn) algorithm, and each time it does O(n) work.
def equivalent(a, b):
    if isEqual(a, b):
        return True
    half = int(len(a) / 2)
    if 2*half != len(a):
        return False
    if (equivalent(a[:half], b[:half]) and equivalent(a[half:], b[half:])):
        return True
    if (equivalent(a[:half], b[half:]) and equivalent(a[half:], b[:half])):
        return True
    return False
Each of the 4 recursive calls to equivalent reduces the amount of input data by a factor of 2. Thus, assuming that a and b have the same length N, and that isEqual has linear time complexity, we can construct the recurrence relation for the overall complexity:
T(N) = 4T(N/2) + C*N
where C is some constant. We can solve this relation by repeatedly substituting and spotting a pattern:
T(N) = 4^m * T(N/2^m) + C*N*(2^(m-1) + ... + 2 + 1) = 4^m * T(N/2^m) + C*N*(2^m - 1)
What is the upper limit of the summation, m? The stopping condition occurs when len(a) is odd. That may happen anywhere between length N and length 1, depending on the prime decomposition of N. In the worst-case scenario, N is a power of 2, so the function recurses until len(a) = 1, i.e. m = log2(N), which gives
T(N) = N^2 * T(1) + C*N*(N - 1) = O(N^2)
To enhance the above answer, there is a direct way to calculate this with the Master Method. The Master Method works only for the following type of recurrence:
T(n) = aT(n/b) + f(n) where a >= 1 and b > 1
There are three cases based on f(n), with the corresponding solutions:
If f(n) = Θ(n^c) where c < log_b(a), then T(n) = Θ(n^(log_b a))
If f(n) = Θ(n^c) where c = log_b(a), then T(n) = Θ(n^c * log n)
If f(n) = Θ(n^c) where c > log_b(a), then T(n) = Θ(f(n)) = Θ(n^c)
In your case,
we have a = 4, b = 2, c = 1, and c < log_b(a),
i.e. 1 < log_2(4)
Hence => case 1
Therefore:
T(n) = Θ(n^(log_b a))
T(n) = Θ(n^(log_2 4))
T(n) = Θ(n^2)
More details with examples can be found in wiki.
Hope it helps!
Consider this code:
def count_7(lst):
    if len(lst) == 1:
        if lst[0] == 7:
            return 1
        else:
            return 0
    return count_7(lst[:len(lst)//2]) + count_7(lst[len(lst)//2:])
Note: the slicing operations are to be considered O(1).
So, my intuition is telling me it's O(n*log n), but I'm struggling to prove it rigorously.
I'd be glad for any help!
Ok, mathematically (sort of ;) I get something like this:
T(n) = 2T(n/2) + c
T(1) = 1
Generalizing the equation:
T(n) = 2^k * T(n/2^k) + (2^k - 1) * c
T(1) = 1
n/2^k == 1 when k == log(n), so:
T(n) = 2^log(n) * T(1) + (2^log(n) - 1) * c
Since T(1) = 1 and 2^log(n) = n (applying logarithm properties):
T(n) = n * 1 + (n - 1) * c
T(n) <= n * (1 + c)
T(n) = O(n)
A clue that this is not O(n*log n) is that you don't have to do any real work to combine the two subproblems. Unlike mergesort, where you have to merge the two subarrays, this algorithm only adds the two recursive results, so the non-recursive work per call can be expressed as the constant c.
UPDATE: The intuition behind it
This algorithm should be O(n) because you visit each element in the array only once. It may not seem trivial, because recursion never is.
For example, you divide the problem into two subproblems of half the size, each subproblem is divided in half again, and this keeps going until each subproblem is of size 1. When you finish, you'll have n subproblems of size 1, which is n*O(1) = O(n).
The depth of the path from the original problem down to the n subproblems of size 1 is logarithmic, because at each step you divide in two. But at each step you do nothing with the result, so this doesn't add anything to the time complexity of the solution.
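If you want to check this concretely, here is an instrumented sketch (count_7_calls is a made-up name) that counts the total number of recursive calls:

def count_7_calls(lst):
    # same recursion as count_7, but also returns how many calls were made in total
    if len(lst) == 1:
        return (1 if lst[0] == 7 else 0), 1
    left, left_calls = count_7_calls(lst[:len(lst)//2])
    right, right_calls = count_7_calls(lst[len(lst)//2:])
    return left + right, left_calls + right_calls + 1

_, calls = count_7_calls(list(range(1024)))
print(calls)  # 2047, i.e. 2n - 1 calls, each doing O(1) work, so O(n) overall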
Hope this helps
The easiest way is to assume n is a power of 2 for simplicity: n = 2^m
The time complexity of your algorithm is (c is a constant):
t(n) = 2 t(n/2) + c
And unrolling the recursion you get:
t(n) = 2^2 t(n/2^2) + 2c + c
...
= 2^log(n) t(n/2^log(n)) + c(2^(log(n)-1) + ... + 2^2 + 2^1 + 2^0)
Which can be simplified by noticing that log(n) = m, and thus 2^log(n) = 2^m = n.
= n + c(2^(log(n)-1) + ... + 2^2 + 2^1 + 2^0)
Finally, the sum above is bounded by 2^log(n) (which equals n), so
t(n) <= (1 + c) n
So your solution is O(n)
You scan all the elements of the list once; that's O(n). The only difference from a simple recursive scan is the order in which you scan them: 1, n/2, 2, 3n/4, etc. instead of 1, 2, 3, ..., but the complexity is the same.