I'm fairly new to time complexity. I'm looking for the time complexity of this code:
def func(arg):
    result = []
    for i in range(len(arg)):
        result.append(arg.count(i))
    return result
I know that the loop would make it O(n), but count is also O(n) in Python - would that make this function O(n) or O(n^2)?
You have a loop within a loop:
for i in range(len(arg)):  # outer loop => O(n)
    arg.count(i)           # inner loop hidden inside a function => O(n)
So that's O(n^2).
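If you want to avoid the hidden inner loop, one linear-time alternative is to tally every element in a single pass with collections.Counter (a sketch assuming the elements are hashable; func_linear is a hypothetical name, not from the question):

```python
from collections import Counter

def func_linear(arg):
    counts = Counter(arg)  # one O(n) pass tallies every element
    # n dictionary lookups, each O(1) on average
    return [counts[i] for i in range(len(arg))]

print(func_linear([0, 0, 2, 1]))  # → [2, 1, 1, 0]
```

Counter's missing keys default to 0, so indices that never appear in the input come out as a count of 0, matching what arg.count(i) would return.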
If you wanted two loops that sum to O(n), you'd need something like this:
for x in range(N):  # O(N)
    ...             # do stuff

for y in range(N):  # O(N)
    ...             # do other stuff
The overall complexity will be the sum of the loops' complexities, so
O(N) + O(N) = O(2 * N) ~= O(N)
O(n^2).
The outer loop executes the inner statement (which is O(n)) n times, so we get quadratic complexity.
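You can see the quadratic growth without timing anything by counting the element visits instead of doing them (a rough cost model, not real timings):

```python
def count_operations(n):
    arg = list(range(n))
    ops = 0
    for i in range(len(arg)):  # outer loop: n iterations
        ops += len(arg)        # arg.count(i) scans all n elements
    return ops

# doubling n roughly quadruples the work: the signature of O(n^2)
print(count_operations(10))  # → 100
print(count_operations(20))  # → 400
```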
The time complexity of a for loop with n as the input is O(n) from what I've understood till now but what about the code inside the loop?
while var in arr:
    arr.remove(var)
arr is a list with n elements and var can be a string or a number.
How do I know if I should multiply or add time complexities? Is the time complexity of the above code O(n^2) or O(n)?
for i in range(n):
    arr.remove(var)
    arr.remove(var1)
What would the time complexity be now? What should I add or multiply?
I tried learning about time complexity but couldn't understand how to deal with code having more than one time complexity.
You need to know the time complexity of the content inside the loop.
for i in arr:            # O(n)
    print(sum(arr) - i)  # O(n)
In this case, the sum(arr) call is nested in the for loop, so you need to multiply its complexity by the for loop's complexity: O(n) * O(n) = O(n*n) = O(n²).
for i in arr:            # O(n)
    print(sum(arr) - i)  # O(n)
    print(sum(arr) - i)  # O(n)
In this case, it's
O(n) * (O(n) + O(n))
O(n) * O(n+n)
O(n) * O(2n)
O(n) * O(n)
O(n*n)
O(n²)
See When to add and when to multiply to find time complexity for more information about that.
For a while loop, nothing changes: multiply the complexity of the body by the number of iterations of the while.
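The second snippet from the question can be turned into a small cost model that makes the add-versus-multiply rule concrete (a sketch counting element visits, with hypothetical helper names):

```python
def body_cost(n):
    # two sequential O(n) removes inside the loop body: their costs ADD
    return n + n            # O(n) + O(n) -> still O(n)

def loop_cost(n):
    # the for loop repeats the body n times: the costs MULTIPLY
    return n * body_cost(n)  # O(n) * O(n) -> O(n^2)

print(loop_cost(10))  # → 200
```

Rule of thumb: sequential statements add, and a loop multiplies its body's cost by its iteration count.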
I'm calculating the time complexities of algorithms and I assumed both pieces of code below have a time complexity of O(n^2).
However, my book says the first is O(n^2) and the second is O(n). I don't understand why. Both use min/max, so what's the difference?
Code 1:
def sum(l, n):
    for i in range(1, n - 1):
        x = min(l[0:i])
        y = min(l[i:n])
    return x + y
Code 2:
def sum(a, n):
    r = [0] * n
    l = [0] * n
    min_el = a[0]
    for i in range(n):
        min_el = min(min_el, a[i])
        l[i] = min_el
        print(min_el)
In the first block of code, the min function runs over a slice of the whole array, which takes O(n) time. Since it is inside a loop of length n, the total time is O(n^2).
Looking at the 2nd block of code: note that the min function only compares 2 values, which is O(1). Since it is inside a loop of length n, the total time is the sum of n constant-time operations, which is O(n).
In the first code, an array is passed to the min() function, which has O(n) time complexity because it checks all elements in the slice. In the second code, min() only compares two values, which takes O(1).
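To make the difference concrete, here is a sketch that counts how many elements each version inspects instead of timing it (hypothetical counting helpers, not the book's code):

```python
def slice_min_inspections(n):
    # Code 1: min(l[0:i]) looks at i elements, min(l[i:n]) at n - i
    total = 0
    for i in range(1, n - 1):
        total += i + (n - i)  # n elements inspected per iteration
    return total              # ~ n * n -> O(n^2)

def running_min_inspections(n):
    # Code 2: min(min_el, a[i]) looks at a constant 2 values
    total = 0
    for _ in range(n):
        total += 2            # O(1) work per iteration
    return total              # 2n -> O(n)

print(slice_min_inspections(10), running_min_inspections(10))  # → 80 20
```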
Let's say we have the following code.
def problem(n):
    list = []
    for i in range(n):
        list.append(i)
        length = len(list)
    return list
The program has a time complexity of O(n) if we don't count len(list). But if we do, will the time complexity be O(n * log(n)) or O(n^2)?
No - the len() function takes constant time in Python, and it does not depend on the length of the list, because a list stores its own size. The time complexity of the above code remains O(n), governed by the for i in range(n) loop. Here is the time complexity of many CPython operations, like len() ("Get Length" in the table).
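Since a Python list keeps its element count in its header, len() is a single field read. Here is the question's code annotated with per-statement costs (renamed result to avoid shadowing the built-in list; a sketch, not a benchmark):

```python
def problem(n):
    result = []
    for i in range(n):          # n iterations
        result.append(i)        # O(1) amortized
        length = len(result)    # O(1): reads the stored size, no counting
    return result               # total: n * O(1) = O(n)

print(problem(5))  # → [0, 1, 2, 3, 4]
```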
def myFunction(mylist):
    n = len(mylist)
    p = []
    sum = 0
    for x in mylist:
        if n > 100:
            sum = sum + x
        else:
            for y in mylist:
                p.append(y)
My thought process was that if the else statement were to be executed, the operations within are O(n) because the number of times through depends on the length of the list. Similarly, I understood the first loop to be O(n) as well thus making the entire worst-case complexity O(n^2).
Apparently the correct answer is O(n). Any explanation would be greatly appreciated :)
Just to add a bit: we typically think of Big-O complexity in the case where n gets large. As n gets large (here, n > 100), the else branch is never executed, so the function is just O(n).
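One way to see it is to model the operation count as a function of n (a rough cost model with a hypothetical helper, not real timings):

```python
def ops(n):
    # cost model for myFunction on a list of length n
    if n > 100:
        return n      # linear branch: n iterations of O(1) work
    else:
        return n * n  # quadratic branch, but capped at 100 * 100 = a constant

# past the constant threshold, growth is linear, so the function is O(n)
print(ops(1000))  # → 1000
```

The quadratic work can only ever cost at most 100 * 100 operations, and any constant amount of work disappears in Big-O.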
I have a question about iterating through a list in python.
Let's say I have lists A = [1, 2, 3, 4] and B = []. What is the difference (if any) between using these two loops? I'm interested in the time complexity.
for i in range(len(A)):
    B.append(A[i])

for i in A:
    B.append(i)
The time complexity is identical for both of those loops.
Think about it this way:
How many iterations will they have to do?
They'll both have to do len(A) number of loops. So therefore, they will take the same length of time.
Another way this may be written is O(n). This is an example of Big-O notation and just means that the time complexity is linear - i.e. both of the loops will take the same amount of extra time if the list grows from length 5 to 10 as they would if it grew from length 1000 to 1005.
--
The other time complexities can be seen clearly in the following graph, which was taken from this great explanation in another answer:
According to this question/answer, len(A) has a time-complexity of O(1), so it doesn't increase the complexity of the first loop that you mentioned. Both possibilities have to do n cycles, where n is the length of A.
All in all, both possibilities have a time-complexity of O(n).
Each loop is O(n), or linear time:
for i in range(len(A)):
    B.append(A[i])

for i in A:
    B.append(i)
Each append operation is O(1) amortized, and the indexing at B.append(A[i]) is also O(1). Thus, the overall time complexity for this code block is:
T(N) = O(n) + O(n) = 2 * O(n) => O(n)
since Big-O notation discards constant factors.
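Both styles do the same linear amount of work and build the same list - a quick sketch:

```python
A = [1, 2, 3, 4]

B1 = []
for i in range(len(A)):  # n iterations: O(1) index + O(1) append each
    B1.append(A[i])

B2 = []
for x in A:              # n iterations: O(1) append each
    B2.append(x)

print(B1 == B2 == A)  # → True
```

The index-based loop does one extra O(1) lookup per iteration, which only changes the constant factor, not the O(n) complexity.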