I'm calculating the time complexities of algorithms, and I assumed both pieces of code below have a time complexity of O(n^2).
However, my book says the first code is O(n^2) and the second one is O(n). I don't understand why. Both use min(), so what's the difference?
Code 1:
def sum(l, n):
    for i in range(1, n - 1):
        x = min(l[0:i])   # min of the left part: scans up to i elements, O(n)
        y = min(l[i:n])   # min of the right part: scans n - i elements, O(n)
    return x + y
Code 2:
def sum(a, n):
    r = [0] * n                      # unused here; presumably meant for suffix minimums
    l = [0] * n                      # l[i] will hold the minimum of a[0..i]
    min_el = a[0]
    for i in range(n):
        min_el = min(min_el, a[i])   # compares just two values: O(1)
        l[i] = min_el
    print(min_el)
In the first block of code, each iteration runs min() over a slice of the array, which takes O(n) time. Since that sits inside a loop of length n, the total time is O(n^2).
Looking at the second block of code, note that min() is only comparing 2 values, which is O(1). Inside a loop of length n, the total is a sum of n constant-time steps, which is O(n).
In the first code, an array is passed to the min() function, and that costs O(n) because min() checks every element of the slice. In the second code, min() compares only two values, which takes O(1).
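For completeness, here is a sketch of where the second code is presumably heading: the unused r array suggests a second pass that fills suffix minimums, so min(left part) + min(right part) can be evaluated in O(1) per split point. The completion below is my assumption, not the questioner's original code:

def min_sum(a, n):
    l = [0] * n   # l[i] = min of a[0..i]   (prefix minimums)
    r = [0] * n   # r[i] = min of a[i..n-1] (suffix minimums)

    min_el = a[0]
    for i in range(n):                 # first O(n) pass, O(1) work per step
        min_el = min(min_el, a[i])
        l[i] = min_el

    min_el = a[n - 1]
    for i in range(n - 1, -1, -1):     # second O(n) pass, right to left
        min_el = min(min_el, a[i])
        r[i] = min_el

    # min(left part) + min(right part), minimized over all split points: O(n)
    return min(l[i] + r[i + 1] for i in range(n - 1))

print(min_sum([3, 1, 4, 1, 5], 5))  # prints 2: min([3,1]) + min([4,1,5]) = 1 + 1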
I wanted to solve the tower hopper problem in as much ways that I can and calculate each way's time complexity (just for self practice).
One of the solution is this:
def is_hopable(arr):
    if len(arr) < 1 or arr[0] == 0:
        return False
    if arr[0] >= len(arr):
        return True
    res = False
    for i in range(1, arr[0] + 1):
        res = res or is_hopable(arr[i:])  # This line
    return res
I know the general idea of recursive time complexity calculation, but I'm having trouble analyzing the commented line (inside the for loop). Usually I calculate the time complexity with T(n) = C + T(that line) and reduce it to a general expression (for example, T(n-k)) until I reach the base case and can express k in terms of n. But what is the time complexity of that for loop?
The complexity of that for loop can be up to O(n^2), because every iteration of the loop (up to n iterations) does a slice arr[i:], which returns a copy of arr without its first i elements and therefore costs O(n). With that in mind, the overall time is O(n^3).
The mentioned upper bound is tight.
Example: arr = [n-1, n-2, n-3, ..., 1, 1]
Alternative form: arr[i] = n - 1 - i for all i with 0 <= i < n - 1, and arr[n-1] = 1, where n is the length of arr.
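As a quick sketch, that instance can be generated like this (the helper name is mine):

def worst_case_towers(n):
    # arr[i] = n - 1 - i for 0 <= i < n - 1, with arr[n-1] = 1
    return [n - 1 - i for i in range(n - 1)] + [1]

print(worst_case_towers(5))  # [4, 3, 2, 1, 1]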
The recurrence counting the number of elementary operations (ignoring constant factors) for that instance can be stated as:

T(n) = T(n-1) + (n-1) + (n-2) + ... + 1, with T(1) = 1

(each call pays n - i operations for the slice at iteration i of its loop, and the recursion descends on a subarray one element shorter).

Simplify the summation:

T(n) = T(n-1) + n(n-1)/2

Evaluate (unroll) the lesser terms of T and search for a lower bound, using k(k-1) >= (k-1)^2:

T(n) = n(n-1)/2 + (n-1)(n-2)/2 + ... + 2*1/2 >= (1/2) * (1^2 + 2^2 + ... + (n-1)^2)

Use the formula for the sum of squares from 1 to n, 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6:

T(n) >= (n-1)n(2n-1)/12

As this lower bound on T(n) is a polynomial of degree 3, we have found that this instance of the problem runs in Ω(n^3), proving that the upper bound for the problem, O(n^3), is tight.
Side note:
If you pass the original array plus a current index as parameters (instead of slicing), each loop iteration is O(1), the runtime of the for loop becomes O(n), and the overall time drops to O(n^2).
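A sketch of that index-based variant (my rewrite of the question's function; the start parameter is an assumption about how you would thread the index through):

def is_hopable(arr, start=0):
    # Same logic as the question's version, but a start index replaces the
    # arr[i:] slice, so each loop iteration costs O(1) instead of an O(n) copy.
    n = len(arr) - start
    if n < 1 or arr[start] == 0:
        return False
    if arr[start] >= n:
        return True
    res = False
    for i in range(1, arr[start] + 1):
        res = res or is_hopable(arr, start + i)
    return res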
I am new to the concept of asymptotic analysis. I am reading "Data Structures and Algorithms in Python" by Goodrich. That book has the following implementation:
def prefix_average2(S):
    """Return list A such that, for all j, A[j] equals the average of S[0], ..., S[j]."""
    n = len(S)
    A = [0] * n                        # create a new list of n zeros
    for j in range(n):
        A[j] = sum(S[0:j+1]) / (j+1)   # record the average
    return A
The book says that this code runs in O(n^2), but I don't see how. S[0:j+1] runs in O(j+1) time, but how do we know what time sum() takes, and how do we get a total running time of O(n^2)?
You iterate n times in the loop. In the first iteration you sum 1 number (1 time step), then 2 (2 time steps), and so on, until you reach n (n time steps in that iteration, since you have to visit each element once). Therefore you have 1+2+...+(n-1)+n = n(n+1)/2 time steps. This is equal to (n^2+n)/2, or n^2+n after dropping the constant factor. The highest power in this expression is 2, therefore your running time is O(n^2) (always take the highest power).
for j in range(n):            # this loop runs n times
    A[j] = sum(S[0:j+1])      # now let's expand this sum function's implementation
I'm not sure about the exact implementation of the built-in sum(iterable) function, but it must be something like this:
def sum(iterable):
    result = 0
    for item in iterable:   # worst-case time: n iterations
        result += item
    return result
So, finally, your prefix_average2 function runs in n*n = n^2 time in the worst case (when j+1 = n).
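To make that count concrete, here is a sketch that instruments the book's function with a step counter (the counter and the _counted name are my additions):

def prefix_average2_counted(S):
    # prefix_average2 plus a counter for how many elements sum() visits
    n = len(S)
    A = [0] * n
    steps = 0
    for j in range(n):
        steps += j + 1                    # sum(S[0:j+1]) visits j + 1 elements
        A[j] = sum(S[0:j+1]) / (j + 1)
    return A, steps

_, steps = prefix_average2_counted(list(range(100)))
print(steps)  # 5050 == 100 * 101 / 2, the n(n+1)/2 count behind O(n^2)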
First of all, I am not an expert on this topic, but I would like to share my opinion with you.
If the code were similar to the one below:
for j in range(n):
    A[j] += 5
Then we can say the complexity is O(n)
You may ask why we skipped n = len(S) and A = [0] * n.
Because those statements take O(1) time to complete.
Returning to our case:
for j in range(n):
    A[j] = sum(S[0:j+1]) ....
Here, sum(S[0:j+1]) also contains a loop in which the summation is calculated.
You can think of it as:
for q in S:
    S[q] += q   # this is only partially right, but it shows the hidden loop
The important thing is that a two-for-loop calculation is happening in that code:
for j in range(n):
    for q in range(j + 1):
        A[j] = ....
Therefore, the complexity is O(n^2).
The for loop (for j in range(n)) has n iterations:
Iteration (operations)
1st iteration: 1 operation, summing the first 1 element
2nd iteration: 2 operations, summing the first 2 elements
3rd iteration: 3 operations, summing the first 3 elements
...
(n-1)th iteration: n-1 operations, summing the first n-1 elements
nth iteration: n operations, summing the first n elements
So, the total number of operations is the summation 1 + 2 + 3 + ... + (n-1) + n,
which is n(n+1)/2.
So the time complexity is O(n^2), as we have to perform n(n+1)/2 operations.
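For contrast, the quadratic cost disappears if you keep a running total instead of re-summing every prefix; this is a standard improvement, sketched here under my own naming:

def prefix_average_linear(S):
    # O(n) rewrite: maintain a running total instead of re-summing each prefix
    n = len(S)
    A = [0] * n
    total = 0
    for j in range(n):
        total += S[j]            # O(1) update replaces the O(j+1) re-sum
        A[j] = total / (j + 1)
    return A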
from linkedlist import LinkedList

def find_max(linked_list):  # Complexity: O(N)
    current = linked_list.get_head_node()
    maximum = current.get_value()
    while current.get_next_node():
        current = current.get_next_node()
        val = current.get_value()
        if val > maximum:
            maximum = val
    return maximum

def sort_linked_list(linked_list):  # <----- WHAT IS THE COMPLEXITY OF THIS FUNCTION?
    print("\n---------------------------")
    print("The original linked list is:\n{0}".format(linked_list.stringify_list()))
    new_linked_list = LinkedList()
    while linked_list.head_node:
        max_value = find_max(linked_list)
        print(max_value)
        new_linked_list.insert_beginning(max_value)
        linked_list.remove_node(max_value)
    return new_linked_list
Since we go through the while loop N times, the runtime is at least N. For each pass we call find_max; HOWEVER, with each call to find_max, the linked list we are passing to find_max has been reduced by one element. Based on that, isn't the runtime N log N?
Or is it N^2?
It's still O(n²); the reduction in size by 1 each time just makes the effective work n * n / 2 (because on average, you have to deal with half the original length on each pass, and you're still doing n passes). But since constant factors aren't included in big-O notation, that simplifies to just O(n²).
For it to be O(n log n), each step would have to halve the size of the list to scan, not simply reduce it by one.
It's n + (n-1) + (n-2) + ... + 1, which is an arithmetic series, so it sums to n(n+1)/2. In big-O notation that is O(n^2).
Don't forget, O-notation describes an upper bound on growth and covers an entire class of functions. As far as O-notation goes, the following two functions have the same complexity:
64n^2 + 128n + 256 --> O(n^2)
n^2 - 2n + 1 --> O(n^2)
In your case (your algorithm is what's called a selection sort: picking the best element in the list and putting it in a new list; other O(n^2) sorts include insertion sort and bubble sort), you have the following costs:
0th iteration: n
1st iteration: n-1
2nd iteration: n-2
...
(n-1)th iteration: 1
So the entire cost would be
n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 = (1/2)n^2 + (1/2)n
which is still O(n^2), though it is on the low side of that class.
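To see the arithmetic series in action, here is a sketch of the same selection-sort idea on a plain Python list, stripped of the linked-list plumbing (the counter and names are mine):

def selection_sort_counted(items):
    # selection sort on a list, counting how many elements each pass scans
    items = list(items)
    result = []
    scans = 0
    while items:
        scans += len(items)        # the find_max-style pass over what's left
        biggest = max(items)
        items.remove(biggest)
        result.insert(0, biggest)  # build the sorted result front-first
    return result, scans

_, scans = selection_sort_counted(range(10))
print(scans)  # 10 + 9 + ... + 1 = 55 = n(n+1)/2 for n = 10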
Let's say we have the following code.
def problem(n):
    list = []
    for i in range(n):
        list.append(i)
        length = len(list)   # the len() call in question, executed n times
    return list
The program has a time complexity of O(n) if we don't count len(list). But if we do, will the time complexity be O(n * log(n)) or O(n^2)?
No: the len() function takes constant time in Python and does not depend on the length of the list, so the time complexity of the code above remains O(n), governed by your for i in range(n) loop. The CPython wiki lists the time complexity of many built-in operations, including len() ("Get Length" in the table)!
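If you want to convince yourself empirically, here is a rough sketch using timeit; the absolute numbers will vary by machine, but the per-call time should stay flat as the list grows:

import timeit

# len() reads a stored size field, so its timing should not grow with the list
for size in (10, 10_000, 1_000_000):
    data = [0] * size
    t = timeit.timeit(lambda: len(data), number=1_000_000)
    print(f"len() on {size:>9,}-item list: {t:.3f}s per million calls")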
def myFunction(mylist):
    n = len(mylist)
    p = []
    sum = 0
    for x in mylist:
        if n > 100:
            sum = sum + x
        else:
            for y in mylist:
                p.append(y)
My thought process was that if the else branch were executed, the operations inside it would be O(n), because the number of inner iterations depends on the length of the list. Similarly, I understood the outer loop to be O(n) as well, thus making the entire worst-case complexity O(n^2).
Apparently the correct answer is O(n). Any explanation would be greatly appreciated :)
Just to add a bit: we typically think of big-O complexity in the regime where n gets large. As n gets large, n > 100 always holds, so the nested else branch never executes and each outer iteration does O(1) work. Thus it would just be O(n).
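To make that concrete, here is a sketch of the question's function with an operation counter added (the counter and the renamed total are mine); the nested branch can never do more than 100 * 100 = 10,000 appends:

def myFunction_counted(mylist):
    # the question's function, with a counter on the work done in each branch
    n = len(mylist)
    p = []
    total = 0
    ops = 0
    for x in mylist:
        if n > 100:
            total += x          # O(1) work per outer iteration
            ops += 1
        else:
            for y in mylist:    # reachable only when n <= 100 ...
                p.append(y)
                ops += 1        # ... so at most 100 * 100 = 10,000 ops, a constant
    return ops

print(myFunction_counted(list(range(100))))   # 10000: capped by the constant bound
print(myFunction_counted(list(range(1000))))  # 1000: grows linearly with n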