Can you please help me with how to calculate space and time complexity for below code
def isPalindrome(string):
    # Write your code here.
    string1 = string[::-1]
    if string1 == string:
        return True
    else:
        return False
The easiest way to find the complexity is to go line by line and understand each operation.
Let's start with the first line
string1=string[::-1]
This is a string slicing operation, which reverses the string. According to this, slicing takes time proportional to the number of characters copied; in this case (your code) that is the whole string, hence it will be O(n).
This is just line 1. Let's move ahead
if string1==string:
Here we are doing a string comparison in the condition of the if statement. According to this, it is again O(n) for line 2.
Now, the remaining lines are just the return statements and the else block, which run in constant time, i.e. O(1).
Hence, for the total complexity, we just sum up each line's complexity, i.e.
O(n) + O(n) + O(1) + O(1)
You can refer to this to learn more about simplifying it.
So the final time complexity is O(n).
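Putting that line-by-line analysis back into the code as comments, the same function reads:

```python
def isPalindrome(string):
    string1 = string[::-1]   # slice copy: O(n) time, O(n) extra space
    if string1 == string:    # character-by-character comparison: O(n)
        return True          # O(1)
    else:
        return False         # O(1)
# Total: O(n) + O(n) + O(1) = O(n) time
```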
This function can be broken down into the complexity of its sub-processes. To calculate the time complexity, let the number of characters in string be n (n = len(string) in Python terms). Now, let's look at the 2 sub-processes:
Traverse all characters in string in reverse order and assign the result to string1 (this is done by string1 = string[::-1]) - O(n) linear time, since there are n characters in string.
Compare string1 == string - O(n) linear time, because the n characters in each string are compared to each other one-to-one.
Therefore, the total time complexity is O(n) + O(n), where n is len(string). We simplify this to O(n).
The following two operations decide the time complexity of the above code:
With the operation below, you are creating a reversed copy of the string, which takes O(n) time and O(n) space:
string1 = string[::-1]
Checking the two strings for equality in the line below again takes O(n) operations, as you need to compare all the characters in the worst case:
if string1==string:
From the above, we can conclude the following:
Time complexity: O(n)
Space complexity: O(n)
where n represents the length of the input string.
You may also like to go through this document, which summarizes the time complexities of different operations in Python.
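As a side note, if the O(n) extra space of the reversed copy matters, a common alternative (not part of the original code; the name is mine) compares characters from both ends with two indices, using only O(1) extra space while staying O(n) in time:

```python
def is_palindrome_inplace(string):
    # Move two indices inward from the ends: O(n) time, O(1) extra space.
    left, right = 0, len(string) - 1
    while left < right:
        if string[left] != string[right]:
            return False
        left += 1
        right -= 1
    return True
```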
I'm looking to iterate over every third element in my list. But in thinking about Big-O notation, would the Big-O complexity be O(n) where n is the number of elements in the list, or O(n/3) for every third element?
In other words, even if I specify that the list should only be iterated over every third element, is Python still looping through the entire list?
Example code:
def function(lst):
    # iterating over every third element
    for i in lst[2::3]:
        pass
When using Big-O notation, we ignore any scalar multiples in front of the functions, because the algorithm still takes "linear time". We do this because Big-O notation considers the behaviour of an algorithm as it scales to large inputs.
This means it doesn't matter whether the algorithm considers every element of the list or every third element; the time complexity still scales linearly with the input size. For example, if the input size is doubled, the loop takes twice as long to execute, whether you are looking at every element or every third element.
Mathematically, we can say this because of the M term in the definition (https://en.wikipedia.org/wiki/Big_O_notation): f(x) = O(g(x)) means that for all sufficiently large x,
abs(f(x)) <= M * abs(g(x))
Big O notation would remain O(n) here.
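One way to see this concretely is to count the iterations for a doubled input: the every-third loop does about n/3 steps, and doubling n doubles that count, which is exactly what "scales linearly" means:

```python
def count_iterations(lst):
    # Count how many elements the every-third loop actually visits.
    count = 0
    for _ in lst[2::3]:
        count += 1
    return count

print(count_iterations(list(range(300))))  # 100 iterations (~ n/3)
print(count_iterations(list(range(600))))  # 200 iterations: doubling n doubles the work
```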
Consider the following:
n = 1_000_000  # some big number
for i in range(n):
    print(i)
    print(i)
    print(i)
Does doing 3 actions count as O(3n) or O(n)? O(n). Does the real world performance slow down by doing three actions instead of one? Absolutely!
Big O notation is about looking at the growth rate of the function, not about the physical runtime.
Consider the following from the pandas library:
from pandas import DataFrame

df = DataFrame([{"a": 4}, {"a": 3}, {"a": 2}, {"a": 1}])

# simple iteration O(n)
for idx in df.index:
    print(df.loc[idx, "a"])

# iterrows iteration O(n)
for idx, row in df.iterrows():
    print(row["a"])

# apply/lambda iteration O(n)
df.apply(lambda row: print(row["a"]), axis=1)
All of these implementations can be considered O(n) (the constant is dropped); however, that doesn't necessarily mean the runtime will be the same. In fact, method 3 should be about 800 times faster than method 1 (https://towardsdatascience.com/how-to-make-your-pandas-loop-71-803-times-faster-805030df4f06)!
Another answer that may help you: Why is the constant always dropped from big O analysis?
What is the Time complexity of the function below? Is it O(n) or O(1)?
def find_words(grid, words):
    return [find_word(grid, word) for word in words]
I am not entirely sure how multiple complexities are combined in situations like these, but I think this is O(n) * O(find_word), so if find_word is O(n), then the worst case is O(n * n)? What I calculated might be completely wrong, though, since the n outside and the n inside are different, so it is more like O(n) * O(m), where m is the input size of find_word.
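The reasoning above is on the right track: the list comprehension calls find_word once per word, so the total cost is len(words) times the cost of one call. A minimal sketch, with a hypothetical find_word that merely scans every row of the grid (a real implementation would be more involved):

```python
def find_word(grid, word):
    # Hypothetical stand-in: check each row for the word's first letter.
    # Cost: proportional to the grid size per call.
    return any(word[0] in row for row in grid)

def find_words(grid, words):
    # len(words) calls, each costing O(grid size):
    # total O(len(words) * grid size).
    return [find_word(grid, word) for word in words]

grid = [["c", "a"], ["t", "x"]]
print(find_words(grid, ["cat", "dog"]))  # [True, False]
```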
Below I have provided a function to calculate the LCP (longest common prefix). I want to know its Big O time complexity and space complexity. Can I say it is O(n)? Or do zip() and join() affect the time complexity? I also think the space complexity is O(1). Please correct me if I am wrong. The input to the function is a list containing strings, e.g., ["flower", "flow", "flight"].
def longestCommonPrefix(self, strs):
    res = []
    for x in zip(*strs):
        if len(set(x)) == 1:
            res.append(x[0])
        else:
            break
    return "".join(res)
Iterating to get a single tuple value from zip(*strs) takes O(len(strs)) time and space. That's just the time it takes to allocate and fill a tuple of that length.
Iterating to consume the whole iterator takes O(len(strs) * min(len(s) for s in strs)) time, but shouldn't take any additional space over a single iteration.
Your iteration code is a bit trickier, because you may stop iterating early, when you find the first place within your strings where some characters don't match. In the worst case, all the strings are identical (up to the length of the shortest one) and so you'd use the time complexity above. And in the best case there is no common prefix, so you can use the single-value iteration as your best case.
But there's no good way to describe "average case" performance because it depends a lot on the distributions of the different inputs. If your inputs were random strings, you could do some statistics and predict an average number of iterations, but if your input strings are words, or even more likely, specific words expected to have common prefixes, then it's very likely that all bets are off.
Perhaps the best way to describe that part of the function's performance is actually in terms of its own output: it takes O(len(strs) * len(self.longestCommonPrefix(strs))) time to run.
As for str.join, running "".join(res) if we know nothing about res takes O(len(res) + len("".join(res))) for both time and space. Because your code only joins individual characters, the two lengths are going to be the same, so we can say that the join in your function takes O(len(self.longestCommonPrefix(strs))) time and space.
Putting things together, we can see that the main loop takes a multiple of the time taken by the join call, so we can ignore the latter and say that the function's time complexity is just O(len(strs) * len(self.longestCommonPrefix(strs))). However, the memory usage of the two parts is independent, and we can't easily predict whether the number of strings or the length of the output will grow faster. So we need to combine them and say that you need O(len(strs) + len(self.longestCommonPrefix(strs))) space.
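A quick trace on the example input makes the output-dependent cost visible (the function from the question, rewritten as a plain function so it can be run directly):

```python
def longest_common_prefix(strs):
    res = []
    for x in zip(*strs):       # one tuple of characters per position
        if len(set(x)) == 1:   # all strings agree at this position
            res.append(x[0])
        else:
            break              # first mismatch: stop early
    return "".join(res)

# Only 3 tuples are built: ('f','f','f'), ('l','l','l'), ('o','o','i'),
# i.e. len(prefix) + 1 iterations, each costing O(len(strs)).
print(longest_common_prefix(["flower", "flow", "flight"]))  # fl
```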
Time:
Your code is O(n * m), where n is the length of the list and m is the length of the longest string in the list.
zip() itself is O(1) in Python 3.x: the function allocates a special iterable (the zip object) and stores the argument iterators in an internal field. In the case of zip(*x) (as pointed out by @juanpa.arrivillaga), the unpacking builds a tuple of the arguments first, so that call is O(n). As a result, you still end up with O(n): iterating over the list (tuple) plus the zip(*x) call stays at O(n).
join() is O(n), where n is the total length of the input.
set() is O(m), where m is the total length of its input.
Space:
It is O(n), because in the worst case res will need to hold n characters, one append of x[0] per position.
Can anyone advise the space and time complexity of the below code?
I know that the time complexity should be O(n) because the function is called n times, and the space complexity is at least O(n) (because of the stack space), but does passing a[1:] to the function increase the space complexity? I think a[1:] creates a new copy of a with the first element omitted; is that right?
def sum(a):
    if len(a) == 1:
        return a[0]
    return a[0] + sum(a[1:])
As a recursive function with no tail-call optimization applied, it will certainly have a space complexity of at least O(n) in this case, considering its execution on the memory stack. But let us analyze it further:
Time complexity
We know that sum is recursive, and its stop criterion is an input array of length one. So we know that sum will be called O(n) times in the worst case, for an input array of size n. Consider the recursion for what it is, i.e., a loop.
Inside the function, however, we have a slice operation. The slice l[a:b] is O(b - a), so this operation costs O(n - 1) in the first call, O(n - 2) in the second call, and so on; under the hood it copies the sliced part of the array. The overall time complexity is therefore O(n^2), because the function creates one slice per item of an array of size n.
Space complexity
Now talking about space in memory.
len(a) == 1
Here we have one copy of the return value of len(a).
return a[0]
&
return a[0] + sum(a[1:])
In both lines above, we have another copy of a value that is stored into the return slot of the function. The slice a[1:] also has O(n) space complexity.
Seeing this, and assuming no major optimizations are applied by the interpreter, we say that each call of this function needs O(n) space: it makes a constant number of copies AND performs a slice operation on an array of size up to n.
Since we said in the beginning that recursion is like a loop, without tail-call optimization the function's stack frames pile up n times in the worst case. The program grows the call stack until it reaches the stop criterion, and only then 'pops' the return values. Because every frame keeps its own slice alive (of sizes n-1, n-2, ..., 1), the total space complexity is O(n^2).
Ps:
I also considered len(a) to have an O(1) time complexity, according to this.
The time complexity is something like theta(n^2), because each time you do a[1:] you basically copy the list from index 1 to the end, so you have to iterate through it. As for space complexity, the call stack will hold all of the lists you create: first a list with n elements, then n-1, and so on down to 1, at which point the stack starts unwinding. So you end up with theta(n^2) space as well.
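To isolate the cost of the slices, compare with a variant (not from the question; the name is mine) that recurses on an index instead of slicing. It keeps the O(n) recursion depth, but without the per-call copy the time drops to O(n) and the extra space to the O(n) call stack:

```python
def sum_no_slice(a, i=0):
    # Recurse on an index instead of a slice: no copy per call.
    # Assumes a non-empty list, like the original.
    if i == len(a) - 1:
        return a[i]
    return a[i] + sum_no_slice(a, i + 1)

print(sum_no_slice([1, 2, 3, 4]))  # 10
```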
I've worked up the following code, which finds anagrams. I had thought the Big O notation for this was O(n), but I was informed by my instructor that I am incorrect. I am confused about why this is not correct; would anyone be able to offer any advice?
# Define an anagram.
def anagram(s1, s2):
    return sorted(s1) == sorted(s2)

# Main function.
def Question1(t, s):
    # use the built-in any() to check whether any anagram of t is a substring of s
    return any(anagram(s[i:i + len(t)], t)
               for i in range(len(s) - len(t) + 1))
Function Call:
# Simple test case.
print(Question1("app", "paple"))
# True
any anagram of t is substring of s
That's not what your code says.
You have "any substring of s is an anagram of t", which might be equivalent, but it's easier to understand that way.
As for complexity, you need to define what you're calling N... Is it len(s)-len(t)+ 1?
In that case, yes, the any() call performs N checks.
However, each of those checks calls anagram on an input of length T, and you seem to have ignored that.
anagram calls sorted twice, and each call to sorted is itself O(T * log(T)), assuming a merge-sort-style algorithm (Python uses Timsort). You're also performing a string slice per check, so it could be slightly higher.
Let's say your complexity is somewhere on the order of (S - T) * 2 * (T * log(T)), where T and S are the lengths of the strings.
The answer depends on which string of your input is larger.
Best case is that they are the same length because then your range only has one element.
Big O notation is worst case, though, so you need to figure out which conditions generate the most total operations. For example, what if T > S? Then len(s) - len(t) + 1 will be non-positive, so does the code run more or less than with equal-length strings? And what about S = 0?
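One way to convince yourself that the sorts dominate is to count them. The sketch below (names are mine) wraps the same algorithm and counts sorted() calls: each anagram check performs two sorts of length len(t), and any() runs up to len(s) - len(t) + 1 checks before short-circuiting:

```python
def question1_counting(t, s):
    # Same algorithm as Question1, instrumented to count sorted() calls.
    sorts = 0

    def anagram(s1, s2):
        nonlocal sorts
        sorts += 2  # two sorts per anagram check
        return sorted(s1) == sorted(s2)

    found = any(anagram(s[i:i + len(t)], t)
                for i in range(len(s) - len(t) + 1))
    return found, sorts

print(question1_counting("app", "paple"))  # (True, 2): any() stops at the first match
print(question1_counting("xyz", "paple"))  # (False, 6): all 3 windows checked, 2 sorts each
```

In the worst case (no match), all S - T + 1 windows are checked, which gives the (S - T) * 2 * (T * log(T)) order of growth described above.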
This is not O(N) complexity, for a few reasons. First, sorted() has O(n log n) complexity. Second, you potentially call it many times (sorting both t and each slice of s), if t is long enough.