Quicksort RecursionError - python

I wrote some quicksort code, and the problem is that I get a RecursionError for large arrays, for example ones with over 1000 values. However, I don't have the problem with small arrays. I don't know why I get the error. Can someone help me?
My Code:
last = object()

def quicksort(array, start=0, ende=last):
    if ende is last:
        ende = len(array) - 1

    def partition(array, anfang, ende):
        piv_index = anfang
        piv = array[piv_index]
        while anfang < ende:
            while anfang < len(array) and array[anfang] <= piv:
                anfang += 1
            while array[ende] > piv:
                ende -= 1
            if anfang < ende:
                array[anfang], array[ende] = array[ende], array[anfang]
        array[ende], array[piv_index] = array[piv_index], array[ende]
        return ende

    if start < ende:
        p = partition(array, start, ende)
        quicksort(array, start, p - 1)
        quicksort(array, p + 1, ende)
    return array

As mentioned in the comments by Barmar, there is a recursion limit in Python, which defaults to 1000.
You can see the actual value on your system with sys.getrecursionlimit(), which on my system is 1000 (Python 3).
You can raise the recursion limit if you need to with sys.setrecursionlimit(limit).
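For example, a minimal sketch using the standard sys module:

import sys

print(sys.getrecursionlimit())   # typically 1000 by default
sys.setrecursionlimit(5000)      # raise it only if you know how deep you need to go
print(sys.getrecursionlimit())   # 5000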
You say that you run into the recursion error when testing with 1000 numbers. You must be testing a strictly descending list, e.g. 1000, 999, 998, ..., 1, which will hit the limit.
Actually, with the recursion limit set to 1000, such an input list with 999 items, and your program's choice of pivot (the first element of each partition), you will hit the limit. With 998 numbers you'll be OK.
Choosing a higher recursion limit may make sense if you know the maximum size of the array. Otherwise you have to account for a possible recursion error.
What you could do, apart from choosing another suitable sort algorithm:
set the recursion limit to suit your needs, if possible
choose the pivot randomly or otherwise more carefully (there are several options here)
rewrite the quicksort so that it is iterative (that's more difficult for some, but spares you the recursion depth problem; see the sketch after this list)
The options have their pros and cons of course, so you have to make a compromise (as the saying goes, you have to pick your poison).
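For illustration, a minimal sketch of the iterative option: an explicit stack of index ranges replaces the call stack (this uses Lomuto partitioning with a last-element pivot, not the partition scheme from the question):

def quicksort_iterative(array):
    # explicit stack of (lo, hi) index ranges replaces the call stack
    stack = [(0, len(array) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = array[hi]              # last element as pivot (Lomuto scheme)
        i = lo
        for j in range(lo, hi):
            if array[j] <= pivot:
                array[i], array[j] = array[j], array[i]
                i += 1
        array[i], array[hi] = array[hi], array[i]
        stack.append((lo, i - 1))      # left part
        stack.append((i + 1, hi))      # right part
    return array

Note that the explicit stack can still grow linearly on unlucky inputs; pushing the larger of the two ranges first (so the smaller one is popped and handled next) keeps the pending stack logarithmic, mirroring the recurse-on-the-smaller-part trick in the next answer.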

To avoid a stack overflow, stack space can be limited to O(log2(n)) by recursing on the smaller partition and looping on the larger partition. Worst-case time complexity remains O(n^2). The code in the question is a variation of the Hoare partition scheme, which needs some cleanup. Here is example code that uses the middle element for the pivot, with the partition logic included in the quicksort function.
def qsort(a, lo, hi):                   # Hoare partition, post-increment / decrement
    while lo < hi:
        p = a[(lo + hi) // 2]           # pivot
        i = lo
        j = hi
        while i <= j:
            while a[i] < p:
                i += 1
            while a[j] > p:
                j -= 1
            if i > j:
                break
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
        # recurse on the smaller part, loop on the larger part
        if (j - lo) <= (hi - i):
            qsort(a, lo, j)
            lo = i
        else:
            qsort(a, i, hi)
            hi = j
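A quick usage check (the function sorts in place and takes inclusive bounds):

from random import randint

data = [randint(0, 100) for _ in range(10000)]
qsort(data, 0, len(data) - 1)
assert data == sorted(data)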

Related

How to find the recurrence relation, and calculate Master Theorem of a Merge Sort Code?

I'm trying to apply the Master Theorem to this merge sort code, but first I need to find its recurrence relation, and I'm struggling to do and understand both. I already saw some similar questions here, but couldn't understand the explanations. For example, do I first need to find how many operations the code has? Could someone help me with that?
def mergeSort(alist):
    print("Splitting ", alist)
    if len(alist) > 1:
        mid = len(alist) // 2
        lefthalf = alist[:mid]
        righthalf = alist[mid:]
        mergeSort(lefthalf)
        mergeSort(righthalf)
        i = 0
        j = 0
        k = 0
        while i < len(lefthalf) and j < len(righthalf):
            if lefthalf[i] < righthalf[j]:
                alist[k] = lefthalf[i]
                i = i + 1
            else:
                alist[k] = righthalf[j]
                j = j + 1
            k = k + 1
        while i < len(lefthalf):
            alist[k] = lefthalf[i]
            i = i + 1
            k = k + 1
        while j < len(righthalf):
            alist[k] = righthalf[j]
            j = j + 1
            k = k + 1
    print("Merging ", alist)

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
mergeSort(alist)
print(alist)
To determine the run-time of a divide-and-conquer algorithm using the Master Theorem, you need to express the algorithm's run-time as a recursive function of input size, in the form:
T(n) = aT(n/b) + f(n)
T(n) is how we're expressing the total runtime of the algorithm on an input size n.
a stands for the number of recursive calls the algorithm makes.
T(n/b) represents the recursive calls: The n/b signifies that the input size to the recursive calls is some particular fraction of original input size (the divide part of divide-and-conquer).
f(n) represents the amount of work you need to do in the main body of the algorithm, generally just to combine solutions from recursive calls into an overall solution (you could say this is the conquer part).
Here's a slightly re-factored definition of mergeSort:
def mergeSort(arr):
    if len(arr) <= 1: return  # array of size 1 or 0 is already sorted
    # split the array in half
    mid = len(arr) // 2
    L = arr[:mid]
    R = arr[mid:]
    mergeSort(L)      # sort left half
    mergeSort(R)      # sort right half
    merge(L, R, arr)  # merge sorted halves
We need to determine a, n/b, and f(n).
Because each call of mergeSort makes two recursive calls: mergeSort(L) and mergeSort(R), a=2:
T(n) = 2T(n/b) + f(n)
n/b represents the fraction of the current input that recursive calls are made with. Because we are finding the midpoint and splitting the input in half, passing one half the current array to each recursive call, n/b = n/2 and b=2. (if each recursive call instead got 1/4 of the original array b would be 4)
T(n) = 2T(n/2) + f(n)
f(n) represents all the work the algorithm does besides making recursive calls. Every time we call mergeSort, we calculate the midpoint in O(1) time.
We also split the array into L and R, and technically creating these two sub-array copies is O(n). Then, presuming mergeSort(L), sorted the left half of the array, and mergeSort(R) sorted the right half, we still have to merge the sorted sub-arrays together to sort the entire array with the merge function. Together, this makes f(n) = O(1) + O(n) + complexity of merge. Now let's take a look at merge:
def merge(L, R, arr):
    i = j = k = 0                     # 3 assignments
    while i < len(L) and j < len(R):  # 2 comparisons
        if L[i] < R[j]:               # 1 comparison, 2 array idx
            arr[k] = L[i]             # 1 assignment, 2 array idx
            i += 1                    # 1 assignment
        else:
            arr[k] = R[j]             # 1 assignment, 2 array idx
            j += 1                    # 1 assignment
        k += 1                        # 1 assignment
    while i < len(L):                 # 1 comparison
        arr[k] = L[i]                 # 1 assignment, 2 array idx
        i += 1                        # 1 assignment
        k += 1                        # 1 assignment
    while j < len(R):                 # 1 comparison
        arr[k] = R[j]                 # 1 assignment, 2 array idx
        j += 1                        # 1 assignment
        k += 1                        # 1 assignment
This function has more going on, but we just need to get its overall complexity class to be able to apply the Master Theorem accurately. We can count every single operation, that is, every comparison, array index, and assignment, or just reason about it more generally. Generally speaking, you can say that across the three while loops we are going to iterate through every member of L and R and assign them in order to the output array, arr, doing a constant amount of work for each element. Noting that we are processing every element of L and R (n total elements) and doing a constant amount of work for each element would be enough to say that merge is in O(n).
But, you can get more particular with counting operations if you want. For the first while loop, every iteration we make 3 comparisons, 5 array indexes, and 2 assignments (constant numbers), and the loop runs until one of L and R is fully processed. Then, one of the next two while loops may run to process any leftover elements from the other array, performing 1 comparison, 2 array indexes, and 3 variable assignments for each of those elements (constant work). Therefore, because each of the n total elements of L and R cause at most a constant number of operations to be performed across the while loops (either 10 or 6, by my count, so at most 10), and the i=j=k=0 statement is only 3 constant assignments, merge is in O(3 + 10*n) = O(n). Returning to the overall problem, this means:
f(n) = O(1) + O(n) + complexity of merge
= O(1) + O(n) + O(n)
= O(2n + 1)
= O(n)
T(n) = 2T(n/2) + n
One final step before we apply the Master Theorem: we want f(n) written as n^c. For f(n) = n = n^1, c=1. (Note: things change very slightly if f(n) = n^c*log^k(n) rather than simply n^c, but we don't need to worry about that here)
You can now apply the Master Theorem, which in its most basic form says to compare a (how quickly the number of recursive calls grows) to b^c (how quickly the amount of work per recursive call shrinks). There are 3 possible cases, the logic of which I try to explain, but you can ignore the parenthetical explanations if they aren't helpful:
a > b^c, T(n) = O(n^log_b(a)). (The total number of recursive calls is growing faster than the work per call is shrinking, so the total work is determined by the number of calls at the bottom level of the recursion tree. The number of calls starts at 1 and is multiplied by a log_b(n) times because log_b(n) is the depth of the recursion tree. Therefore, total work = a^log_b(n) = n^log_b(a))
a = b^c, T(n) = O(f(n)*log(n)). (The growth in number of calls is balanced by the decrease in work per call. The work at each level of the recursion tree is therefore constant, so total work is just f(n)*(depth of tree) = f(n)*log_b(n) = O(f(n)*log(n)).)
a < b^c, T(n) = O(f(n)). (The work per call shrinks faster than the number of calls increases. Total work is therefore dominated by the work at the top level of the recursion tree, which is just f(n))
For the case of mergeSort, we've seen that a = 2, b = 2, and c = 1. As a = b^c, we apply the 2nd case:
T(n) = O(f(n)*log(n)) = O(n*log(n))
And you're done. This may seem like a lot work, but coming up with a recurrence for T(n) gets easier the more you do it, and once you have a recurrence it's very quick to check which case it falls under, making the Master Theorem quite a useful tool for solving more complicated divide/conquer recurrences.
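If it helps to see the recurrence concretely, here is a small sketch that unrolls T(n) = 2T(n/2) + n numerically (assuming T(1) = 1 as the base case) and compares it to n*log2(n):

from math import log2

def T(n):
    # unroll T(n) = 2*T(n/2) + n with T(1) = 1
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in (2**k for k in range(4, 14)):
    print(n, T(n), round(n * log2(n)))

The two columns grow at the same rate, which is exactly what the second Master Theorem case predicts.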

Python merge sort + insertion sort hybrid Tim sort

I have already made code for insertion sort and merge sort. Now I want to combine my insertion sort and merge sort into a Tim sort.
I can see that the example of Tim sort uses start, mid and end inputs, but it should be possible to do this without them, no?
I would like to keep my merge and insertion sorts as they are if possible, because their inputs and outputs fit well with the rest of my code.
from random import randint

minrun = 32

def insertion_sort(in_data):
    s_data = list(in_data)
    for i in range(1, len(s_data)):
        key = s_data[i]
        j = i - 1
        while j >= 0 and key < s_data[j]:
            s_data[j + 1] = s_data[j]
            j -= 1
        s_data[j + 1] = key
    return s_data

def merge(a, b):
    c = []
    a_idx, b_idx = 0, 0
    while a_idx < len(a) and b_idx < len(b):
        if a[a_idx] < b[b_idx]:
            c.append(a[a_idx])
            a_idx += 1
        else:
            c.append(b[b_idx])
            b_idx += 1
    if a_idx == len(a):
        c.extend(b[b_idx:])
    else:
        c.extend(a[a_idx:])
    return c

def merge_sort(a):
    if len(a) <= 1:
        return a
    left, right = merge_sort(a[:len(a) // 2]), merge_sort(a[len(a) // 2:])
    return merge(left, right)

def tim_sort(in_data):
    n = len(in_data)
    for start in range(0, n, minrun):
        end = min(start + minrun - 1, n - 1)
        in_data = insertion_sort(in_data, start, end)
    curr_size = minrun
    while curr_size < n:
        for start in range(0, n, curr_size * 2):
            mid = min(n - 1, start + curr_size - 1)
            end = min(n - 1, mid + curr_size)
            in_data = merge_sort(in_data, start, mid, end)
        curr_size *= 2
    return in_data

def create_array(size=10, max=50):
    from random import randint
    return [randint(0, max) for _ in range(size)]
I found this example of Tim sort but I struggle with how to make it work within my code.
It is not clear where you obtained this example, but it certainly is not timsort. If you want to implement timsort, the first thing to do is read and understand Tim Peters description of the algorithm:
https://svn.python.org/projects/python/trunk/Objects/listsort.txt
This is the definitive document regarding timsort. You can find all kinds of rubbish with google. The only reference I have ever found that might be worth reading to get your feet wet is:
https://www.infopulse.com/blog/timsort-sorting-algorithm
which is lightweight, but fairly complete and not seriously incorrect in any way. It does, however, omit any consideration of galloping, which is the trickiest part of the algorithm.
It is important to realize that Python is a dynamic language, so naively implementing timsort will produce something that uses a large excess of memory due to internal object allocation. Timsort requires:
sorting in-place
stability
maintaining an invariant
galloping
thorough testing
Sorting in-place in python implies indexing the data list manually. If you use slices, you allocate and dispose of memory each time.
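For example, here is a minimal sketch of what "indexing manually" means in practice: an insertion sort that sorts a[lo:hi+1] in place by index, so no slice copies are made (the range-based signature is an illustration, not the OP's original function):

def insertion_sort_inplace(a, lo, hi):
    # sorts a[lo:hi+1] in place; no slicing, so no temporary copies
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key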
There are three python implementations on the web that are worth looking at for guidance that I am aware of:
1 https://github.com/reingart/pypy/blob/master/rpython/rlib/listsort.py
2 https://gist.github.com/ruminations/89a045dc0ef7edfb92304a0de0752ee0
3 https://github.com/hu-ng/timsort
The first is part of the pypy trunk and is implemented in rpython. It appears to be an adaptation of the cpython implementation. rpython is a restricted subset of python intended for static compilation.
The second is a well tested implementation that is well documented and fairly readable. The last is apparently a university exercise, and appears to be complete and correct, but not well tested.
You can find dozens of other attempts at python implementation of timsort but all I have seen either fail to fulfill basic requirements or are clearly incorrect.
Finally, if you expect someone to help you adapt your code, you should at least link to it, but better to provide it directly, for neither mergesort nor insort are difficult to code.

QuickSort works slow for low range data

Lately I'm comparing different types of sort algorithms in Python. I noticed that my quicksort doesn't handle inputs with repeated values well.
def compare_asc(a, b):
    return a <= b

def partition(a, p, r, compare):
    pivot = a[r]
    i = p - 1
    for j in range(p, r):
        if compare(a[j], pivot):
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def part_quick_sort(a, p, r, compare):
    if p < r:
        q = partition(a, p, r, compare)
        part_quick_sort(a, p, q - 1, compare)
        part_quick_sort(a, q + 1, r, compare)

def quick_sort(a, compare):
    part_quick_sort(a, 0, len(a) - 1, compare)
    return a
Then I test this
import numpy as np
from timeit import default_timer as timer
import sys
test_list1 = np.random.randint(-10000, 10000, size=10000).tolist()
start = timer()
test_list1 = quick_sort(test_list1, compare_asc)
elapsed = timer() - start
print(elapsed)
test_list2 = np.random.randint(0, 2, size=10000).tolist()
start = timer()
test_list2 = quick_sort(test_list2, compare_asc)
elapsed = timer() - start
print(elapsed)
In this example I get RecursionError: maximum recursion depth exceeded in comparison, so I added sys.setrecursionlimit(1000000), and after that I get this output:
0.030029324000224733
5.489867554000284
Can anyone explain why it throws this recursion depth error only while sorting the 2nd list? And why is there such a big time difference?
Here's a hint: pass a list where all the elements are the same, and watch what it does line by line. It will take time quadratic in the number of elements, and recurse to a level approximately equal to the number of elements.
The usual quicksort partition implementations proceed from both ends, so that in the all-equal case the list slice is approximately cut in half. You can get decent performance in this case for your "only look left-to-right" approach, but the clearest way to do so is to partition into three regions: "less than", "equal", and "greater than".
That can be done in a single left-to-right pass, and is usually called the "Dutch national flag problem". As the text on the linked page says,
The solution to this problem is of interest for designing sorting algorithms; in particular, variants of the quicksort algorithm that must be robust to repeated elements need a three-way partitioning function ...
CODE
For concreteness, here's a complete implementation doing one-pass "left to right" single-pivot 3-way partitioning. It also incorporates other well-known changes needed to make a quicksort robust for production use. Note:
You cannot create a pure quicksort that avoids worst-case quadratic time. The best you can do is average-case O(N*log(N)) time, and (as below, for one way) make worst-case O(N**2) time unlikely.
You can (as below) guarantee worst-case logarithmic recursion depth.
In this approach, a list of all-equal elements is not a bad case, but a very good case: the partitioning routine is called just once total.
The code:
from random import randrange

def partition(a, lo, hi, pivot):
    i = L = lo
    R = hi
    # invariants:
    # a[lo:L]  < pivot
    # a[L:i]  == pivot
    # a[i:R]  unknown
    # a[R:hi]  > pivot
    while i < R:
        elt = a[i]
        if elt < pivot:
            a[L], a[i] = elt, a[L]
            L += 1
            i += 1
        elif elt > pivot:
            R -= 1
            a[R], a[i] = elt, a[R]
        else:
            i += 1
    return L, R
def qsort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    while True:  # sort a[lo:hi] in place
        if hi - lo <= 1:
            return
        # select pivot at random; else it's easy to construct
        # inputs that systematically require quadratic time
        L, R = partition(a, lo, hi, a[randrange(lo, hi)])
        # must recur on only the shorter chunk to guarantee
        # worst-case recursion depth is logarithmic in hi-lo
        if L - lo <= hi - R:
            qsort(a, lo, L)
            # loop to do qsort(a, R, hi)
            lo = R
        else:
            qsort(a, R, hi)
            # loop to do qsort(a, lo, L)
            hi = L
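A quick way to exercise it on the kind of input from the question, with many repeated values:

from random import randint
from timeit import default_timer as timer

data = [randint(0, 1) for _ in range(10000)]  # lots of duplicates, like test_list2
start = timer()
qsort(data)
print(timer() - start)
assert data == sorted(data)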

Converting a function with two recursive calls into an iterative function

I've got a function that has two recursive calls, and I'm trying to convert it to an iterative function. I've figured out how to do it with one call fairly easily, but I can't figure out how to incorporate the other call.
The function:
def specialMultiplication(n):
    if n < 2:
        return 1
    return n * specialMultiplication(n-1) * specialMultiplication(n-2)
If I just had one of them, it would be really easy:
def specialMult(n, mult=1):
    while n > 1:
        (n, mult) = (n - 1, n * mult)  # Or n-2 for the second one
    return mult
I just can't figure out how to add the second call in to get the right answer overall. Thanks!
If you don't mind changing the structure of your algorithm a bit more, you can calculate the values in a bottom-up fashion, starting with the smallest values.
def specialMultiplication(max_n):
    a = b = 1
    for n in range(1, max_n + 1):
        a, b = b, a * b * n
    return b
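A quick check against the original recursive definition for small inputs (the recursive version is repeated here under a different name purely for the comparison):

def special_recursive(n):
    if n < 2:
        return 1
    return n * special_recursive(n - 1) * special_recursive(n - 2)

for n in range(15):
    assert specialMultiplication(n) == special_recursive(n)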
Convert the recursion to an iterative function using an auxiliary "todo list":
def specialMultiplication(n):
    to_process = []
    result = 1
    if n >= 2:
        to_process.append(n)
    while to_process:  # while list is not empty
        n = to_process.pop()
        result *= n
        if n >= 3:
            to_process.append(n - 1)
        if n >= 4:
            to_process.append(n - 2)
    return result
create a work list (to_process)
if n >= 2, add n to the list
while to_process is not empty, pop item from list, multiply to result
if n-1 < 2, don't perform "left" operation (don't append to work list)
if n-2 < 2, don't perform "right" operation (don't append to work list)
This method has the advantage of consuming less stack. I've checked the results against recursive version for values from 1 to 25 and they were equal.
Note that it's still slow, since complexity is O(2^n) so it's beginning to be really slow from n=30 (time doubles when n increases by 1). n=28 is computed in 12 seconds on my laptop.
I've successfully used this method to fix a stack overflow problem when performing a flood fill algorithm (see Fatal Python error: Cannot recover from stack overflow. During Flood Fill), but here Blckknght's answer is a better fit because it rethinks the way of computing the result from the start.
The OP's function has the same recursive structure as the Fibonacci and Lucas functions, just with different values for f0, f1, and g:
f(0) = f0
f(1) = f1
f(n) = g(f(n-2), f(n-1), n)
This is an example of a recurrence relation. Here is an iterative version of the general solution that calculates f(n) in n steps. It corresponds to a bottom-up tail recursion.
def f(n):
    if not isinstance(n, int):  # Can be loosened a bit
        raise TypeError('Input must be an int')  # Can be more informative
    if n < 0:
        raise ValueError('Input must be non-negative')
    if n == 0:
        return f0
    i, fi_1, fi = 1, f0, f1  # invariant: fi_1, fi = f(i-1), f(i)
    while i < n:
        i += 1
        fi_1, fi = fi, g(fi_1, fi, i)  # restore invariant for new i
    return fi
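To recover the OP's specialMultiplication from this general solution, plug in f0 = f1 = 1 and g(a, b, n) = n * a * b (f0, f1 and g are the placeholders from the recurrence above; the module-level definitions below are just one way to supply them):

f0 = 1
f1 = 1

def g(f_nm2, f_nm1, n):
    # f(n) = n * f(n-1) * f(n-2)
    return n * f_nm1 * f_nm2

print([f(n) for n in range(8)])  # [1, 1, 2, 6, 48, 1440, 414720, 4180377600]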
Blckknght's answer is a simplified version of this.

Python: Function not changing elements in array

I know that in Python, variables are passed by giving a copy of the reference to the object. But I do not understand why in the following piece of code I wrote, the function Partition does not change the elements of arr.
def Partition(arr, lo, hi):
    pivot = arr[lo]
    i = lo
    j = hi
    while(True):
        while(arr[i] < pivot):
            i += 1
            if i == hi: break
        while(arr[j] > pivot):
            j -= 1
            if j == lo: break
        if i >= j: break  # check if ptrs cross
        arr[i], arr[j] = arr[j], arr[i]
    # swap lo and j
    arr[lo], arr[j] = arr[j], arr[lo]
    return j

def Sort(arr, start, end):
    if (end <= start): return
    right = Partition(arr, start, end)
    Sort(arr, start, right - 1)
    Sort(arr, right + 1, end)
Your Partition function has a logic problem.
If you follow it with a debugger, you will see it always gets the array back to its initial state before returning. The array actually does get modified; the problem is that after fiddling with it for a while, it ends up back exactly how it was when entering the function.
Do you use debugging tools? If not, start doing so now.
If so, put a breakpoint on the return j statement, and examine the array, you will see what I mean.
You are trying to implement a Hoare partitioning, right?
I think you got it somewhat mixed up. The issue is that, because you're comparing against the pivot before the first loop iteration, you end up comparing the elements you just swapped again.
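For comparison, here is a minimal sketch of the textbook Hoare scheme, where each index is moved before it is compared, so freshly swapped elements are not examined again (this is the standard formulation rather than a drop-in fix for the code above):

def hoare_partition(arr, lo, hi):
    pivot = arr[lo]
    i = lo - 1
    j = hi + 1
    while True:
        i += 1                      # advance first, then compare
        while arr[i] < pivot:
            i += 1
        j -= 1
        while arr[j] > pivot:
            j -= 1
        if i >= j:
            return j
        arr[i], arr[j] = arr[j], arr[i]

def hoare_quicksort(arr, lo, hi):
    if lo < hi:
        p = hoare_partition(arr, lo, hi)
        hoare_quicksort(arr, lo, p)      # note: p stays in the left half
        hoare_quicksort(arr, p + 1, hi)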
This seems to be a bug:
Assume arr = [1,3,4,7,5,8], lo=3, hi=6
def Partition(arr, lo, hi):
    pivot = arr[lo]   # <- this is arr[3] = 7
    i = lo            # <- i = 3
    j = hi
    while(True):
        while(arr[i] < pivot):  # <- arr[3] = 7, so the condition fails, hence no swap
