QuickSort runs slowly on low-range data - Python

Lately I've been comparing different sorting algorithms in Python. I noticed that my quicksort doesn't handle inputs with repeated values well.
def compare_asc(a, b):
    return a <= b

def partition(a, p, r, compare):
    pivot = a[r]
    i = p - 1
    for j in range(p, r):
        if compare(a[j], pivot):
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i+1], a[r] = a[r], a[i+1]
    return i + 1

def part_quick_sort(a, p, r, compare):
    if p < r:
        q = partition(a, p, r, compare)
        part_quick_sort(a, p, q-1, compare)
        part_quick_sort(a, q+1, r, compare)

def quick_sort(a, compare):
    part_quick_sort(a, 0, len(a)-1, compare)
    return a
Then I test it like this:
import numpy as np
from timeit import default_timer as timer
import sys
test_list1 = np.random.randint(-10000, 10000, size=10000).tolist()
start = timer()
test_list1 = quick_sort(test_list1, compare_asc)
elapsed = timer() - start
print(elapsed)
test_list2 = np.random.randint(0, 2, size=10000).tolist()
start = timer()
test_list2 = quick_sort(test_list2, compare_asc)
elapsed = timer() - start
print(elapsed)
In this example I get RecursionError: maximum recursion depth exceeded in comparison, so I added sys.setrecursionlimit(1000000), and after that I get this output:
0.030029324000224733
5.489867554000284
Can anyone explain why it throws this recursion depth error only while sorting the second list? And why is there such a big time difference?

Here's a hint: pass a list where all the elements are the same, and watch what it does line by line. It will take time quadratic in the number of elements, and recurse to a level approximately equal to the number of elements.
The usual quicksort partition implementations proceed from both ends, so that in the all-equal case the list slice is approximately cut in half. You can get decent performance in this case for your "only look left-to-right" approach, but the clearest way to do so is to partition into three regions: "less than", "equal", and "greater than".
That can be done in a single left-to-right pass, and is usually called the "Dutch national flag problem" (https://en.wikipedia.org/wiki/Dutch_national_flag_problem). As the text on that page says,
The solution to this problem is of interest for designing sorting algorithms; in particular, variants of the quicksort algorithm that must be robust to repeated elements need a three-way partitioning function ...
For concreteness, here's a complete implementation doing one-pass "left to right" single-pivot 3-way partitioning. It also incorporates other well-known changes needed to make a quicksort robust for production use. Note:
You cannot create a pure quicksort that avoids worst-case quadratic time. The best you can do is average-case O(N*log(N)) time, and (as below, for one way) make worst-case O(N**2) time unlikely.
You can (as below) guarantee worst-case logarithmic recursion depth.
In this approach, a list of all-equal elements is not a bad case, but a very good case: the partitioning routine is called just once total.
The code:
from random import randrange

def partition(a, lo, hi, pivot):
    i = L = lo
    R = hi
    # invariants:
    # a[lo:L]  < pivot
    # a[L:i]  == pivot
    # a[i:R]  unknown
    # a[R:hi]  > pivot
    while i < R:
        elt = a[i]
        if elt < pivot:
            a[L], a[i] = elt, a[L]
            L += 1
            i += 1
        elif elt > pivot:
            R -= 1
            a[R], a[i] = elt, a[R]
        else:
            i += 1
    return L, R
def qsort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    while True:  # sort a[lo:hi] in place
        if hi - lo <= 1:
            return
        # select pivot at random; else it's easy to construct
        # inputs that systematically require quadratic time
        L, R = partition(a, lo, hi, a[randrange(lo, hi)])
        # must recur on only the shorter chunk to guarantee
        # worst-case recursion depth is logarithmic in hi-lo
        if L - lo <= hi - R:
            qsort(a, lo, L)
            # loop to do qsort(a, R, hi)
            lo = R
        else:
            qsort(a, R, hi)
            # loop to do qsort(a, lo, L)
            hi = L
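As a quick sanity check, here's a minimal usage sketch (the input sizes are arbitrary): the all-equal case that crippled the original code now finishes after a single call to the partitioning routine.

    from random import randrange

    # all-equal input: partition() runs exactly once, recursion stays shallow
    a = [7] * 100000
    qsort(a)
    assert a == [7] * 100000

    # heavy-duplicate input, similar to the asker's second test list
    b = [randrange(2) for _ in range(100000)]
    expected = sorted(b)
    qsort(b)
    assert b == expected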

Related

Quicksort RecursionError

I wrote some quicksort code, and the problem is that I get a RecursionError for large arrays, for example with over 1000 values. However, I don't have the problem with small arrays. I don't know why I get the error. Can someone help me?
My Code:
last = object()

def quicksort(array, start=0, ende=last):
    if ende is last:
        ende = len(array) - 1
    def partition(array, anfang, ende):
        piv_index = anfang
        piv = array[piv_index]
        while anfang < ende:
            while anfang < len(array) and array[anfang] <= piv:
                anfang += 1
            while array[ende] > piv:
                ende -= 1
            if anfang < ende:
                array[anfang], array[ende] = array[ende], array[anfang]
        array[ende], array[piv_index] = array[piv_index], array[ende]
        return ende
    if start < ende:
        p = partition(array, start, ende)
        quicksort(array, start, p-1)
        quicksort(array, p+1, ende)
    return array
As mentioned in the comments by Barmar, there is a recursion limit in Python, which defaults to 1000.
You can see the actual value on your system with the function sys.getrecursionlimit(), which on my system is 1000 (Python 3).
You can set the recursion limit higher if you need to with sys.setrecursionlimit(limit).
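For illustration, a minimal sketch of inspecting and raising the limit (the value 5000 is just an arbitrary example):

    import sys

    print(sys.getrecursionlimit())  # typically 1000 on CPython
    sys.setrecursionlimit(5000)     # raise it if you know your worst-case depth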
You say that you run into the recursion error when testing with 1000 numbers. You must be testing a strictly descending ordered list, e.g. 1000, 999, 998, ..., 1, which will hit the limit. Actually, with the recursion limit set to 1000, such an input list of 999 items, and your program's choice of pivot, you will already hit the limit; with 998 numbers you'll be OK.
Choosing a higher recursion limit may make sense if you know the maximum size of the array. Otherwise you have to account for a possible recursion error.
What you could do, apart from choosing another suitable sorting algorithm:
set the recursion limit to suit your needs, if possible
choose the pivot randomly or more deliberately (there are several options here)
rewrite the quicksort so that it is iterative (that's harder for some, but it spares you the recursion depth problem)
The options all have their pros and cons, of course, so you have to make a compromise (as the saying goes, you have to pick your poison).
To avoid stack overflow, stack space can be limited to O(log2(n)) by recursing on the smaller partition and looping on the larger partition. Worst-case time complexity remains O(n^2). The asker's code is also a variation of the Hoare partition scheme, which needs some cleanup. Here is example code that uses the middle element as the pivot, with the partition logic included in the quicksort function:
def qsort(a, lo, hi):  # Hoare partition scheme, post-increment/decrement
    while lo < hi:
        p = a[(lo + hi) // 2]  # pivot
        i = lo
        j = hi
        while i <= j:
            while a[i] < p:
                i += 1
            while a[j] > p:
                j -= 1
            if i > j:
                break
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
        # recurse on smaller part, loop on larger part
        if (j - lo) <= (hi - i):
            qsort(a, lo, j)
            lo = i
        else:
            qsort(a, i, hi)
            hi = j
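A quick check that this version survives inputs that broke the original (a sketch; 100000 descending items is just an example size):

    expected = list(range(1, 100001))
    a = list(range(100000, 0, -1))  # strictly descending, a classic bad case
    qsort(a, 0, len(a) - 1)
    assert a == expected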

Python quicksort algorithm time complexity

def partition(A, l, r):
    p = A[l]
    stack = A[l]
    A[l] = A[r]
    A[r] = stack
    s = l
    for i in range(l, r):
        if A[i] <= p:
            stack2 = A[i]
            A[i] = A[s]
            A[s] = stack2
            s += 1
    stack3 = A[s]
    A[s] = A[r]
    A[r] = stack3
    return s

def quicksort(A, l, r):
    if l < r:
        q = partition(A, l, r)
        quicksort(A, l, q - 1)
        quicksort(A, q + 1, r)
    return A
I've written a "maybe" quicksort algorithm. As I've noticed, the time complexity of partition is O(n) because of the for loop, and the complexity of quicksort also seems to be at least O(n). The question: how is it possible for the entire code to have a total time complexity of O(n log n)?
You partition by 2 at each level until you get down to individual elements. Partitioning and comparisons are what make up the time complexity: you make about n comparisons at each level, and you will be partitioning about log2(n) levels deep, so roughly n * log2(n) work in total (for n = 8, that is 8 comparisons on each of 3 levels).
In the worst case, your array is already sorted: then you partition n times and still make up to n comparisons at each level.
Your sorting function isn't O(n log n). In the worst case, you are making O(n) recursive calls.
As a simple test:
def test(n):
    nums = list(reversed(range(n)))
    return sum(quicksort(nums, 0, n-1))
Then, for example, test(1100) triggers:
RecursionError: maximum recursion depth exceeded in comparison
Which wouldn't happen if you were calling partition just log(n) times.
On the other hand,
import random

def test2(n):
    nums = list(range(n))
    random.shuffle(nums)
    return sum(quicksort(nums, 0, n-1))
works well even for calls like test2(100000), so you do have average-case O(n log n) complexity. This is easy to confirm numerically but difficult to prove; see https://en.wikipedia.org/wiki/Quicksort for a proof.

Entropy Estimator based on the Lempel-Ziv algorithm using Python

This function estimates the entropy of a time series. It is based on the Lempel-Ziv compression algorithm. For a time series of length n, the entropy is estimated as:
E = ( (1/n) * SUM_i L_i )^(-1) * ln(n)
where L_i is the length of the shortest substring starting at position i that does not previously appear in positions 1 to i-1. The estimated entropy converges to the real entropy of the time series as n approaches infinity.
There is already an implementation in MATLAB functions:
https://cn.mathworks.com/matlabcentral/fileexchange/51042-entropy-estimator-based-on-the-lempel-ziv-algorithm?s_tid=prof_contriblnk
I would like to implement it in Python, and I did it like this:
import math

def contains(small, big):
    for i in range(len(big)-len(small)+1):
        if big[i:i+len(small)] == small:
            return True
    return False

def actual_entropy(l):
    n = len(l)
    sequence = [l[0]]
    sum_gamma = 0
    for i in range(1, n):
        for j in range(i+1, n+1):
            s = l[i:j]
            if contains(s, sequence) != True:
                sum_gamma += len(s)
                sequence.append(l[i])
                break
    ae = 1 / (sum_gamma / n) * math.log(n)
    return ae
However, I found that it calculates too slowly as the data size gets bigger. For example, I used a list of 23832 elements as input, and the time consumed was as follows (the data can be found here):
0-1000: 1.7068431377410889 s
1000-2000: 18.561192989349365 s
2000-3000: 84.82257103919983 s
3000-4000: 243.5819959640503 s
...
I have thousands of lists like this to process, and such long running times are unbearable. How should I optimize this function and make it faster?
I played around a bit and tried a few different approaches from another thread on Stack Overflow. This is the code I came up with:
import math

def contains(small, big):
    try:
        # search for the byte representation of `small` inside `big`;
        # both are expected to be numpy arrays here
        big.tostring().index(small.tostring()) // big.itemsize
        return True
    except ValueError:
        return False

def actual_entropy(l):
    n = len(l)
    sum_gamma = 0
    for i in range(1, n):
        sequence = l[:i]
        for j in range(i+1, n+1):
            s = l[i:j]
            if contains(s, sequence) != True:
                sum_gamma += len(s)
                break
    ae = 1 / (sum_gamma / n) * math.log(n)
    return ae
Funnily enough, casting the numpy arrays to strings is faster than working with them directly. A very crude benchmark of my code on my machine, with the data you provided:
N: my code - your code
1000: 0.039s - 1.039s
2000: 0.266s - 18.490s
3000: 0.979s - 74.761s
4000: 2.891s - 285.488s
You may be able to make this even faster if you parallelize the outer loop.
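A minimal sketch of that parallelization idea (the names gamma_for_index and actual_entropy_parallel are hypothetical, and this assumes the numpy-array contains() above; on some platforms the Pool call must run under an if __name__ == "__main__" guard). Each outer iteration is independent, so the per-i contributions can be computed in worker processes and summed:

    import math
    from multiprocessing import Pool

    def gamma_for_index(args):
        # contribution of one outer-loop iteration i to sum_gamma
        l, i = args
        n = len(l)
        sequence = l[:i]
        for j in range(i+1, n+1):
            s = l[i:j]
            if not contains(s, sequence):
                return len(s)
        return 0  # every substring starting at i already appeared

    def actual_entropy_parallel(l):
        n = len(l)
        with Pool() as pool:
            sum_gamma = sum(pool.map(gamma_for_index, [(l, i) for i in range(1, n)]))
        return 1 / (sum_gamma / n) * math.log(n)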

Converting a function with two recursive calls into an iterative function

I've got a function that has two recursive calls and I'm trying to convert it to an iterative function. I've got it figured out where I can do it with one call fairly easily, but I can't figure out how to incorporate the other call.
the function:
def specialMultiplication(n):
    if n < 2:
        return 1
    return n * specialMultiplication(n-1) * specialMultiplication(n-2)
If I just had one of them, it would be really easy:
def specialMult(n, mult=1):
    while n > 1:
        (n, mult) = (n-1, n * mult)  # or n-2 for the second one
    return mult
I just can't figure out how to add the second call in to get the right answer overall. Thanks!
If you don't mind changing the structure of your algorithm a bit more, you can calculate the values in a bottom-up fashion, starting with the smallest values.
def specialMultiplication(max_n):
    a = b = 1
    for n in range(1, max_n+1):
        a, b = b, a*b*n
    return b
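A quick cross-check of this bottom-up version against the original recursive definition (a sketch; the bound of 20 is arbitrary):

    def specialMultiplication_rec(n):
        # the asker's original recursive definition, for comparison
        if n < 2:
            return 1
        return n * specialMultiplication_rec(n-1) * specialMultiplication_rec(n-2)

    assert all(specialMultiplication(n) == specialMultiplication_rec(n)
               for n in range(20))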
Convert the recursion to an iterative function using an auxiliary "todo list":
def specialMultiplication(n):
    to_process = []
    result = 1
    if n >= 2:
        to_process.append(n)
    while to_process:  # while list is not empty
        n = to_process.pop()
        result *= n
        if n >= 3:
            to_process.append(n-1)
        if n >= 4:
            to_process.append(n-2)
    return result
create a work list (to_process)
if n >= 2, add n to the list
while to_process is not empty, pop an item from the list and multiply it into result
if n-1 < 2, don't perform the "left" operation (don't append it to the work list)
if n-2 < 2, don't perform the "right" operation (don't append it to the work list)
This method has the advantage of consuming less stack. I've checked the results against the recursive version for values from 1 to 25 and they were equal.
Note that it's still slow, since the complexity is O(2^n), so it starts getting really slow from n=30 (the time doubles each time n increases by 1); n=28 is computed in 12 seconds on my laptop.
I've successfully used this method to fix a stack overflow problem when performing a flood fill algorithm ("Fatal Python error: Cannot recover from stack overflow" during flood fill), but here Blckknght's answer is better suited because it rethinks the way of computing the result from the start.
The OP's function has the same recursive structure as the Fibonacci and Lucas functions, just with different values for f0, f1, and g:
f(0) = f0
f(1) = f1
f(n) = g(f(n-2), f(n-1), n)
This is an example of a recurrence relation. Here is an iterative version of the general solution that calculates f(n) in n steps. It corresponds to a bottom-up tail recursion.
def f(n):
    if not isinstance(n, int):  # can be loosened a bit
        raise TypeError('Input must be an int')  # can be more informative
    if n < 0:
        raise ValueError('Input must be non-negative')
    if n == 0:
        return f0
    i, fi_1, fi = 1, f0, f1  # invariant: fi_1, fi = f(i-1), f(i)
    while i < n:
        i += 1
        fi_1, fi = fi, g(fi_1, fi, i)  # restore invariant for new i
    return fi
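For the asker's specialMultiplication, the parameters of the recurrence are f0 = f1 = 1 and g(a, b, n) = n * a * b; here is a minimal sketch plugging them into the general solution above:

    f0 = 1
    f1 = 1

    def g(fm2, fm1, n):
        # specialMultiplication: f(n) = n * f(n-1) * f(n-2)
        return n * fm2 * fm1

    print(f(5))  # 1440 == 5 * f(4) * f(3) == 5 * 48 * 6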
Blckknght's answer is a simplified version of this.

Index out of range in implementation of a variation of the mergesort algorithm in Python?

I have written a variation of the merge sort algorithm in Python, based on what I've learnt from the CLRS book, and compared it with the implementation given in the introductory computer science book by MIT. I cannot find the problem in my algorithm: IDLE gives me an "index out of range" error, although everything looks fine to me. I'm unsure if this is due to some confusion from borrowing ideas from the MIT algorithm (see below).
lista = [1,2,3,1,1,1,1,6,7,12,2,7,7,67,4,7,9,6,6,3,1,14,4]

def merge(A, p, q, r):
    q = (p+r)/2
    L = A[p:q+1]
    R = A[q+1:r]
    i = 0
    j = 0
    for k in range(len(A)):
        # if the list R runs out of space and L[i] has nothing to compare
        if i+1 > len(R):
            A[k] = L[i]
            i += 1
        elif j+1 > len(L):
            A[k] = R[j]
            j += 1
        elif L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        elif R[j] <= L[i]:
            A[k] = R[j]
            j += 1
    # when both the sub-arrays have run out and all the ifs and elifs are done,
    # the for loop has effectively ended
    return A

def mergesort(A, p, r):
    """A is the list, p is the first index and r is the last index for which
    the portion of the list is to be sorted."""
    q = (p+r)/2
    if p < r:
        mergesort(A, p, q)
        mergesort(A, q+1, r)
        merge(A, p, q, r)
    return A

print mergesort(lista, 0, len(lista)-1)
I have followed the pseudocode in CLRS as closely as I could, just without using the "infinity value" sentinel at the end of L and R, which would keep being compared (is this less efficient?). I tried to incorporate ideas from the MIT book, such as simply copying the remaining L or R list into A, mutating A, and returning a sorted list. However, I can't seem to find what went wrong with it. Also, I don't get why the pseudocode requires q as an input, given that q would be calculated as (p+r)/2 for the middle index anyway. And why is there a need to put p < r as a condition?
On the other hand, from the MIT book, we have something that looks really elegant.
def merge(left, right, compare):
    """Assumes left and right are sorted lists and
    compare defines an ordering on the elements.
    Returns a new sorted (by compare) list containing the
    same elements as (left + right) would contain.
    """
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if compare(left[i], right[j]):
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    while i < len(left):
        result.append(left[i])
        i += 1
    while j < len(right):
        result.append(right[j])
        j += 1
    return result

import operator

def mergeSort(L, compare=operator.lt):
    """Assumes L is a list, compare defines an ordering
    on elements of L.
    Returns a new sorted list containing the same elements as L."""
    if len(L) < 2:
        return L[:]
    else:
        middle = len(L) // 2
        left = mergeSort(L[:middle], compare)
        right = mergeSort(L[middle:], compare)
        return merge(left, right, compare)
Where could I have gone wrong?
Also, I think the key difference in the MIT implementation is that it creates a new list instead of mutating the original one. This makes mergesort harder for me to follow, because I found the CLRS explanation quite clear: I understood it in terms of successive layers of recursion sorting the most minute components of the original list (the lists of length 1 that need no sorting), thus "storing" the results of the recursion within the old list itself.
However, thinking about it again, is it right to say that each recursion in the MIT algorithm returns a new "result" list, which is in turn combined at the level above?
Thank you!
The fundamental difference between your code and MIT's is the conditional statement in the mergesort function. Where your if statement is:
if p<r:
theirs is:
if len(L) < 2:
This means that if you were to have, at any point in the recursive call tree, a list with len(A) == 1, you would still call merge on a size-1 or even size-0 list. You can see that this causes problems in the merge function, because then L, R, or both sublists can end up being of size 0, which in turn causes an out-of-bounds index error.
Your problem can then easily be fixed by changing your if statement to something akin to theirs, like len(A) < 2 or r - p < 2; a sketch follows.
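To make that concrete, here is a minimal self-contained sketch (not the asker's exact merge, whose loop bounds also need fixing) that uses half-open bounds so the fewer-than-2-elements base case reads directly:

    def merge(A, p, q, r):
        # merge the sorted runs A[p:q] and A[q:r] back into A[p:r]
        L, R = A[p:q], A[q:r]
        i = j = 0
        for k in range(p, r):
            if j >= len(R) or (i < len(L) and L[i] <= R[j]):
                A[k] = L[i]
                i += 1
            else:
                A[k] = R[j]
                j += 1

    def mergesort(A, p=0, r=None):
        # sort A[p:r] in place; r is exclusive, like a Python slice bound
        if r is None:
            r = len(A)
        if r - p < 2:  # the MIT-style base case: fewer than 2 elements
            return A
        q = (p + r) // 2
        mergesort(A, p, q)
        mergesort(A, q, r)
        merge(A, p, q, r)
        return A

    print(mergesort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]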
