Estimate computational cost of a sorting algorithm - python

Today, my algorithms teacher gave me a little exercise introducing computational cost. The code is as follows:
A = [8, 7, 6, 5, 4, 3, 2, 1]
for y in range(0, len(A)):
    el = A[y]
    i = y - 1
    while i >= 0 and A[i] > el:
        A[i+1] = A[i]
        i = i - 1
    A[i+1] = el
Without wasting your time: it is an algorithm that takes an array and reorders it. I have to find its big-O order. Considering that all assignment operations have cost 1, the "heaviest" lines are the for and the while. The for loop is of the order of O(n) with n = len(A), but I can't figure out how to calculate the while. In the worst case it runs 28 times, but I can't find a correlation with the length of the array. Can someone help me? Many thanks in advance.

For the given input, the condition A[i] > el will always be true when it is evaluated, as every next value el is less than all preceding values A[i]. So the input really triggers a worst-case execution.
Then the number of executions of the inner loop increases by one every time the outer loop makes an iteration:
0
1
2
3
...
n-1
The sum of all these is a triangular number: n(n-1)/2, which is O(n²). For n = 8 that is 8·7/2 = 28, matching the count you observed.
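If you want to check this empirically, here is a small sketch (my own instrumentation, not part of the exercise) that counts how often the while body runs and compares the result with n(n-1)/2:

def count_inner_iterations(A):
    # Run the insertion sort above on a copy and count executions of the while body.
    A = list(A)
    count = 0
    for y in range(len(A)):
        el = A[y]
        i = y - 1
        while i >= 0 and A[i] > el:
            A[i+1] = A[i]
            i = i - 1
            count += 1
        A[i+1] = el
    return count

n = 8
worst = list(range(n, 0, -1))              # [8, 7, ..., 1], the given input
print(count_inner_iterations(worst), n * (n - 1) // 2)   # both print 28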

The while loop inserts A[y] (for some y) into the subarray A[0..y-1] before it. As you can verify, when performed for increasing y, this makes A[0..y] sorted. The cost of the while loop is proportional to the distance between y and the insertion point. At best, this distance is always 1 (A[y] already in place); at worst, it is y (A[y] should come first, as on the given input); on average, y/2 for a uniform distribution.
Hence, globally, the sort is at best Θ(n), but at worst and on average Θ(n²). (Sum of integers up to n.)

Related

How to find the recurrence relation, and calculate Master Theorem of a Merge Sort Code?

I'm trying to apply the Master Theorem to this merge sort code, but first I need to find its recurrence relation, and I'm struggling to do and understand both. I've already seen some similar questions here, but I couldn't follow the explanations. For instance, do I first need to count how many operations the code performs? Could someone help me with that?
def mergeSort(alist):
    print("Splitting ", alist)
    if len(alist) > 1:
        mid = len(alist)//2
        lefthalf = alist[:mid]
        righthalf = alist[mid:]
        mergeSort(lefthalf)
        mergeSort(righthalf)
        i = 0
        j = 0
        k = 0
        while i < len(lefthalf) and j < len(righthalf):
            if lefthalf[i] < righthalf[j]:
                alist[k] = lefthalf[i]
                i = i + 1
            else:
                alist[k] = righthalf[j]
                j = j + 1
            k = k + 1
        while i < len(lefthalf):
            alist[k] = lefthalf[i]
            i = i + 1
            k = k + 1
        while j < len(righthalf):
            alist[k] = righthalf[j]
            j = j + 1
            k = k + 1
    print("Merging ", alist)

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
mergeSort(alist)
print(alist)
To determine the run-time of a divide-and-conquer algorithm using the Master Theorem, you need to express the algorithm's run-time as a recursive function of input size, in the form:
T(n) = aT(n/b) + f(n)
T(n) is how we're expressing the total runtime of the algorithm on an input size n.
a stands for the number of recursive calls the algorithm makes.
T(n/b) represents the recursive calls: The n/b signifies that the input size to the recursive calls is some particular fraction of original input size (the divide part of divide-and-conquer).
f(n) represents the amount of work you need to do in the main body of the algorithm, generally just to combine solutions from recursive calls into an overall solution (you could say this is the conquer part).
Here's a slightly re-factored definition of mergeSort:
def mergeSort(arr):
    if len(arr) <= 1: return   # array of size 1 or 0 is already sorted
    # split the array in half
    mid = len(arr)//2
    L = arr[:mid]
    R = arr[mid:]
    mergeSort(L)        # sort left half
    mergeSort(R)        # sort right half
    merge(L, R, arr)    # merge sorted halves
We need to determine a, n/b, and f(n).
Because each call of mergeSort makes two recursive calls: mergeSort(L) and mergeSort(R), a=2:
T(n) = 2T(n/b) + f(n)
n/b represents the fraction of the current input that recursive calls are made with. Because we are finding the midpoint and splitting the input in half, passing one half the current array to each recursive call, n/b = n/2 and b=2. (if each recursive call instead got 1/4 of the original array b would be 4)
T(n) = 2T(n/2) + f(n)
f(n) represents all the work the algorithm does besides making recursive calls. Every time we call mergeSort, we calculate the midpoint in O(1) time.
We also split the array into L and R, and technically creating these two sub-array copies is O(n). Then, presuming mergeSort(L) sorted the left half of the array and mergeSort(R) sorted the right half, we still have to merge the sorted sub-arrays together to sort the entire array with the merge function. Together, this makes f(n) = O(1) + O(n) + complexity of merge. Now let's take a look at merge:
def merge(L, R, arr):
    i = j = k = 0                         # 3 assignments
    while i < len(L) and j < len(R):      # 2 comparisons
        if L[i] < R[j]:                   # 1 comparison, 2 array idx
            arr[k] = L[i]                 # 1 assignment, 2 array idx
            i += 1                        # 1 assignment
        else:
            arr[k] = R[j]                 # 1 assignment, 2 array idx
            j += 1                        # 1 assignment
        k += 1                            # 1 assignment
    while i < len(L):                     # 1 comparison
        arr[k] = L[i]                     # 1 assignment, 2 array idx
        i += 1                            # 1 assignment
        k += 1                            # 1 assignment
    while j < len(R):                     # 1 comparison
        arr[k] = R[j]                     # 1 assignment, 2 array idx
        j += 1                            # 1 assignment
        k += 1                            # 1 assignment
This function has more going on, but we just need its overall complexity class to be able to apply the Master Theorem accurately. We can count every single operation, that is, every comparison, array index, and assignment, or just reason about it more generally. Generally speaking, you can say that across the three while loops we are going to iterate through every member of L and R and assign them in order to the output array, arr, doing a constant amount of work for each element. Noting that we are processing every element of L and R (n total elements) and doing a constant amount of work for each element is enough to say that merge is in O(n).
But, you can get more particular with counting operations if you want. For the first while loop, every iteration we make 3 comparisons, 5 array indexes, and 2 assignments (constant numbers), and the loop runs until one of L and R is fully processed. Then, one of the next two while loops may run to process any leftover elements from the other array, performing 1 comparison, 2 array indexes, and 3 variable assignments for each of those elements (constant work). Therefore, because each of the n total elements of L and R cause at most a constant number of operations to be performed across the while loops (either 10 or 6, by my count, so at most 10), and the i=j=k=0 statement is only 3 constant assignments, merge is in O(3 + 10*n) = O(n). Returning to the overall problem, this means:
f(n) = O(1) + O(n) + complexity of merge
= O(1) + O(n) + O(n)
= O(2n + 1)
= O(n)
T(n) = 2T(n/2) + n
One final step before we apply the Master Theorem: we want f(n) written as n^c. For f(n) = n = n^1, c=1. (Note: things change very slightly if f(n) = n^c*log^k(n) rather than simply n^c, but we don't need to worry about that here)
You can now apply the Master Theorem, which in its most basic form says to compare a (how quickly the number of recursive calls grows) to b^c (how quickly the amount of work per recursive call shrinks). There are 3 possible cases, the logic of which I try to explain, but you can ignore the parenthetical explanations if they aren't helpful:
a > b^c, T(n) = O(n^log_b(a)). (The total number of recursive calls is growing faster than the work per call is shrinking, so the total work is determined by the number of calls at the bottom level of the recursion tree. The number of calls starts at 1 and is multiplied by a at each of the log_b(n) levels, because log_b(n) is the depth of the recursion tree. Therefore, total work = a^log_b(n) = n^log_b(a).)
a = b^c, T(n) = O(f(n)*log(n)). (The growth in the number of calls is balanced by the decrease in work per call. The work is therefore the same at every level of the recursion tree, so total work is just f(n)*(depth of tree) = f(n)*log_b(n) = O(f(n)*log(n)).)
a < b^c, T(n) = O(f(n)). (The work per call shrinks faster than the number of calls increases. Total work is therefore dominated by the work at the top level of the recursion tree, which is just f(n).)
For the case of mergeSort, we've seen that a = 2, b = 2, and c = 1. As a = b^c, we apply the 2nd case:
T(n) = O(f(n)*log(n)) = O(n*log(n))
And you're done. This may seem like a lot of work, but coming up with a recurrence for T(n) gets easier the more you do it, and once you have a recurrence it's very quick to check which case it falls under, making the Master Theorem quite a useful tool for solving more complicated divide-and-conquer recurrences.
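As a quick sanity check (my own illustration, not part of the original answer), you can evaluate the recurrence numerically, assuming a base case of T(1) = 1, and see that it tracks n*log2(n):

from functools import lru_cache
from math import log2

# Evaluate T(n) = 2*T(n//2) + n with T(1) = 1 and compare it against n*log2(n).
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in (8, 64, 1024, 2**16):
    print(n, T(n), round(n * log2(n)))   # T(n) grows roughly like n*log2(n)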

Finding minimum number of jumps increasing the value of the element

Optimizing a leetcode-style question - DP/DFS
The task is the following:
Given N heights, find the minimum number of suboptimal jumps required to go from start to end. [1-D Array]
A jump is suboptimal if the height of the starting point i is less than or equal to the height of the target point j.
A jump is possible if j - i <= k, where k is the maximal jump distance.
For the first subtask, there is only one k value.
For the second subtask, there are two k values; output the amount of suboptimal jumps for each k value.
For the third subtask, there are 100 k values; output the amount of suboptimal jumps for each k value.
My Attempt
The following snippet is my shot at solving the problem; it gives the correct solution.
This was optimized to handle multiple k values without having to do a lot of unnecessary work.
The problem is that even a solution with a single k value is O(n^2) in the worst case (as k <= N).
A solution would be to eliminate the nested for loop; that is what I'm uncertain how to approach.
def solve(testcase):
    N, Q = 10, 1
    h = [1, 2, 4, 2, 8, 1, 2, 4, 8, 16]  # output 3
    # ^---- + ---^ 0 ^--- + --^ + ^
    k = [3]
    l_k = max(k)
    distances = [99999999999] * N
    distances[N-1] = 0
    db = [[0]*N for i in range(N)]
    for i in range(N-2, -1, -1):
        minLocalDistance = 99999999999
        for j in range(min(i+l_k, N-1), i, -1):
            minLocalDistance = min(minLocalDistance, distances[j] + (h[i] <= h[j]))
            db[i][j] = distances[j] + (h[i] <= h[j])
        distances[i] = minLocalDistance
    print(f"Case #{testcase}: {distances[0]}")
NOTE: This is different from the classic min. jumps problem
Consider the best cost to get to a position i. It is the smaller of:
The minimum cost to get to any of the preceding k positions, plus one (a suboptimal jump); or
The minimum cost to get to any of the lower-height positions in the same window (an optimal jump).
Case (1) can be handled with the sliding-window-minimum algorithm that you can find described, for example, here: Sliding window maximum in O(n) time. This takes amortized constant time per position, or O(N) altogether.
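For reference, here is a minimal sketch of that sliding-window-minimum primitive using a monotonic deque; window_min is a hypothetical helper name, not code from the answer:

from collections import deque

def window_min(values, k):
    # For each index i, return min(values[max(0, i-k):i]), or None when the window is empty.
    dq = deque()              # indices of candidate minima, their values kept increasing
    result = []
    for i in range(len(values)):
        while dq and dq[0] < i - k:                  # evict indices that fell out of the window
            dq.popleft()
        result.append(values[dq[0]] if dq else None)
        while dq and values[dq[-1]] >= values[i]:    # drop entries dominated by the new value
            dq.pop()
        dq.append(i)
    return result

print(window_min([3, 1, 4, 1, 5, 9, 2], 2))  # [None, 3, 1, 1, 1, 1, 5]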
Case (2) has a somewhat obvious solution with a BST: As the window moves, insert each new position into a BST sorted by height. Remove positions that are no longer in the window. Additionally, in each node, store the minimum cost within its subtree. With this structure, you can find the minimum cost for any height bound in O(log k) time.
The expense in case 2 leads to a total complexity of O(N log k) for a single k-value. That's not too bad for complexity, but such BSTs are somewhat complicated and aren't usually provided in standard libraries.
You can make this simpler and faster by recognizing that if the minimum cost in the window is C, then optimal jumps are only beneficial if they come from predecessors of cost C, because cost C+1 is attainable with a sub-optimal jump.
For each cost, then, you can use that same sliding-window-minimum algorithm to keep track of the minimum height in the window for nodes with that cost. Then for case (2), you just need to check to see if that minimum height for the minimum cost is lower than the height you want to jump to.
Maintaining these sliding windows again takes amortized constant time per operation, leading to O(N) time for the whole single-k-value algorithm.
I doubt that there would be any benefit in trying to manage multiple k-values at once.

Python - How to calculate this recursive function time complexity?

I wanted to solve the tower hopper problem in as many ways as I can and calculate each way's time complexity (just for self-practice).
One of the solution is this:
def is_hopable(arr):
    if len(arr) < 1 or arr[0] == 0:
        return False
    if arr[0] >= len(arr):
        return True
    res = False
    for i in range(1, arr[0]+1):
        res = res or is_hopable(arr[i:])  # This line
    return res
I know the general idea of recursive time complexity calculation but I'm having trouble to analyze the commented line (inside the for loop). Usually I calculate the time complexity with T(n) = C + T(that line) and reduce it with a general expression (for example T(n-k)) until I reach the base case and can express k with n, but what is the time complexity of that for loop?
The complexity of that for loop can be up to O(n^2), because every iteration of the loop (up to n iterations) does a slice arr[i:], which returns a copy of arr without the first i elements and costs O(n). So a single call already spends up to O(n^2) in its own loop, on top of whatever the recursive calls cost, and on a bad input the total running time is at least Ω(n^3).
That lower bound is witnessed by the following instance.
Example: arr = [n-1, n-2, n-3, ..., 1, 0]
Alternative form: arr[i] = n - 1 - i for all i with 0 <= i < n, where n is the length of arr. (The last element is 0, so every recursive call returns False; the or never short-circuits and every slice really gets built.)
The recurrence for the number of elementary operations on this instance (ignoring constant factors) can be stated as:
T(1) = 1
T(n) = n + [(n-1) + T(n-1)] + [(n-2) + T(n-2)] + ... + [1 + T(1)]
since the call itself does O(n) work and, for each i from 1 to arr[0] = n-1, pays n-i to build the slice arr[i:] and then recurses on it.
Simplify the summation:
T(n) = n + n(n-1)/2 + T(1) + T(2) + ... + T(n-1)
Evaluate (unroll) the lesser terms of T and search for a lower bound: the line above shows that T(j) >= j(j-1)/2 for every j, so
T(n) >= T(1) + T(2) + ... + T(n-1) >= (1/2)*(1^2 + 2^2 + ... + (n-1)^2) - (1/2)*(1 + 2 + ... + (n-1))
Use the formula for the sum of squares from 1 to n-1, which is (n-1)n(2n-1)/6:
T(n) >= (n-1)n(2n-1)/12 - n(n-1)/4
As this lower bound on T(n) is a polynomial of degree 3, the running time on this instance is Ω(n^3).
Side note:
If you pass the original array and the current index as parameters instead of slicing, the loop in a single call costs only O(n), and (memoising on the index) the overall time drops to O(n^2).
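Here is a minimal sketch of that index-based variant (my own illustration; is_hopable_indexed is an invented name, and I have added memoisation over the start index so the O(n^2) bound actually holds):

from functools import lru_cache

def is_hopable_indexed(arr):
    # Same logic as is_hopable, but passes an index instead of slicing, memoised over pos.
    n = len(arr)

    @lru_cache(maxsize=None)
    def solve(pos):
        remaining = n - pos
        if remaining < 1 or arr[pos] == 0:
            return False
        if arr[pos] >= remaining:
            return True
        # any(...) short-circuits, just like the original res = res or ... pattern
        return any(solve(pos + i) for i in range(1, arr[pos] + 1))

    return solve(0)

print(is_hopable_indexed([2, 1, 0, 3]))            # False: index 2 has value 0 and blocks the path
print(is_hopable_indexed([8, 7, 6, 5, 4, 3, 2, 1]))  # True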

How to calculate Time Complexity of this algorithm

I am new to the concept of asymptotic analysis. I am reading "Data Structures and Algorithms in Python" by Goodrich. That book has an implementation as follows:
def prefix_average2(S):
    """Return list such that, for all j, A[j] equals average of S[0], ..., S[j]."""
    n = len(S)
    A = [0] * n                       # create new list of n zeros
    for j in range(n):
        A[j] = sum(S[0:j+1]) / (j+1)  # record the average
    return A
The book says that this code runs in O(n^2), but I don't see how. S[0:j+1] runs in O(j+1) time, but how do we know what time sum() runs in, and how do we get the running time to be O(n^2)?
You iterate n times in the loop. In the first iteration, you sum 1 number (1 time step), then 2 (2 time steps), and so on, until you reach n (n time steps in this iteration, you have to visit each element once). Therefore, you have 1+2+...+(n-1)+n=(n*(n+1))/2 time steps. This is equal to (n^2+n)/2, or n^2+n after eliminating constants. The order of this term is 2, therefore your running time is O(n^2) (always take the highest power).
for j in range(n):            # This loop runs n times.
    A[j] = sum(S[0:j+1])      # now let's expand this sum function's implementation.
I'm not sure about the exact implementation of the sum(iterable) function, but it must be something like this:
def sum(iterable):
    result = 0
    for item in iterable:     # worst-case time complexity: n
        result += item
    return result
So, finally, your prefix_average2 function will run in n*n = n^2 time in the worst case (when j+1 = n).
First of all, I am not an expert on this topic, but I would like to share my opinion with you.
If the code were similar to this:
for j in range(n):
    A[j] += 5
then we could say the complexity is O(n).
You may ask why we skip n = len(S) and A = [0] * n?
Because those statements take O(1) time.
Returning to our case:
for j in range(n):
    A[j] = sum(S[0:j+1]) ....
Here, inside sum(S[0:j+1]), another loop runs to compute the summation.
You can think of it roughly as:
for q in S:
    total += q   # this is only an approximation
The important thing is that a two-loop calculation is happening in that code:
for j in range(n):
    for q in range(j + 1):
        A[j] = ....
Therefore, the complexity is O(n^2).
The For Loop (for j in range(n)) has n iterations:
Iteration (operations)
1st iteration (1 operation, summing the first 1 element)
2nd iteration (2 operations, summing the first 2 elements)
3rd iteration (3 operations, summing the first 3 elements)
...
(n-1)th iteration (n-1 operations, summing the first n-1 elements)
nth iteration (n operations, summing the first n elements)
So the total number of operations is the summation 1 + 2 + 3 + ... + (n-1) + n,
which is n*(n+1)/2.
So the time complexity is O(n^2), as we have to perform about n*(n+1)/2 operations.
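To see those counts concretely, here is a small sketch (my own, not from the book) that tallies how many elements sum() touches over all iterations and checks the total against n*(n+1)/2:

def prefix_average2_counted(S):
    # prefix_average2 instrumented to count how many elements sum() touches in total.
    n = len(S)
    A = [0] * n
    touched = 0
    for j in range(n):
        touched += j + 1                  # sum(S[0:j+1]) iterates over j+1 elements
        A[j] = sum(S[0:j+1]) / (j + 1)
    return A, touched

S = [54, 26, 93, 17, 77, 31, 44, 55, 20]
n = len(S)
_, touched = prefix_average2_counted(S)
print(touched, n * (n + 1) // 2)          # both print 45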

Python: How can I make my implementation of bubble sort more time efficient?

Here is my code - a bubble sort algorithm for sorting list elements in ascending order:
foo = [7, 0, 3, 4, -1]
cnt = 0
for i in foo:
    for i in range(len(foo)-1):
        if foo[cnt] > foo[cnt + 1]:
            temp = foo[cnt]
            foo[cnt] = foo[cnt + 1]
            foo[cnt + 1] = temp
        cnt = cnt + 1
    cnt = 0
I've been revising my code, but it is still too inefficient for an online judge. Some help would be greatly appreciated!
Early Exit BubbleSort
The first loop has no bearing on what happens inside
The second loop does all the heavy lifting. You can get rid of cnt by using enumerate.
To swap elements, use the pythonic swap - a, b = b, a.
As per this comment, make use of an early exit. If there are no swaps to be made at any point in the inner loop, that means the list is sorted, and no further iteration is necessary. This is the intuition behind changed.
By definition, after the ith iteration of the outer loop, the last i elements will have been sorted, so you can further reduce the constant factor associated with the algorithm.
foo = [7, 0, 3, 4, -1]

for i in range(len(foo)):
    changed = False
    for j, x in enumerate(foo[:-i-1]):
        if x > foo[j + 1]:
            foo[j], foo[j + 1] = foo[j + 1], foo[j]
            changed = True
    if not changed:
        break

print(foo)
# Output: [-1, 0, 3, 4, 7]
Note that none of these optimisations change the asymptotic (big-O) complexity of bubble sort (which remains O(N ** 2)); they only reduce the constant factors involved.
One easy optimization is to start the second loop from index i+1:
for i in range(0, len(foo)):
    for j in range(i+1, len(foo)):
        if foo[i] > foo[j]:
            temp = foo[i]
            foo[i] = foo[j]
            foo[j] = temp
Since you have already sorted everything up to index i, there is no need to iterate over it again. This saves you about half of the comparisons - in this case 10 versus 20 in your original algorithm.
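If you want to verify those counts, here is a tiny sketch (my own) that counts the comparisons made by both versions on [7, 0, 3, 4, -1]:

def count_original(foo):
    # The asker's version, instrumented to count comparisons.
    foo, cnt, comparisons = list(foo), 0, 0
    for i in foo:
        for i in range(len(foo) - 1):
            comparisons += 1
            if foo[cnt] > foo[cnt + 1]:
                foo[cnt], foo[cnt + 1] = foo[cnt + 1], foo[cnt]
            cnt += 1
        cnt = 0
    return comparisons

def count_optimized(foo):
    # The i+1 version, instrumented to count comparisons.
    foo, comparisons = list(foo), 0
    for i in range(len(foo)):
        for j in range(i + 1, len(foo)):
            comparisons += 1
            if foo[i] > foo[j]:
                foo[i], foo[j] = foo[j], foo[i]
    return comparisons

print(count_original([7, 0, 3, 4, -1]), count_optimized([7, 0, 3, 4, -1]))  # 20 10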
You need to understand big-O notation in order to understand how efficient your algorithm is in terms of its use of computational resources, independent of computer architecture or clock rate. It basically helps you analyze how the worst-case running time or memory usage of your algorithm grows as the size of the input increases.
In summary, the running time of your algorithm will fall into one of these categories (from fastest to slowest):
O(1): Constant time. Pronounced (Oh of 1). The fastest time.
O(lg n): Logarithmic time. Pronounced (Oh of log n). Faster than linear time. Traditionally, it is the fastest time bound for search.
O(n): Linear time. Pronounced (Oh of n; n is the size of your input, e.g. the size of an array). Usually what you get when you need to examine every single element of your input.
O(n lg n): The fastest time we can currently achieve when performing a comparison sort on a list of elements.
O(n**2): Oh of n squared. Quadratic time. Often this is the bound when we have nested loops.
O(2**n): Really, REALLY big! A number raised to the power of n is slower than n raised to any power.
In your case, you are using nested loops, which is O(n^2). The code I have written below uses a single while loop, but note that the loop is bounded by count < n**2 (it performs n^2 passes over the list), so it is not asymptotically faster than your nested-loop version. I haven't really tried it on a very large array, but in your case it seems to work. Try it and let me know if it works as expected.
k = [7, 0, 3, 4, -1]
n = len(k)
i = 0
count = 0
swapped = False
while count < n**2:  # assuming we wouldn't go through the loop more than n squared times
    if i == n - 1:
        i = 0
        count += 1
        swapped = False
    elif k[i] > k[i+1]:
        temp = k[i]
        k[i] = k[i+1]
        k[i+1] = temp
        i += 1
        swapped = True
    elif swapped == False:
        i += 1
    elif swapped == True and i < n - 1:
        i += 1
Note: In the example list (k), we only need to loop through the list three times in order for it to be sorted in ascending order. So if you change the while loop to this line of code while count < 4:, it would still work.
