Converting a function with two recursive calls into an iterative function - Python

I've got a function that has two recursive calls and I'm trying to convert it to an iterative function. I've got it figured out where I can do it with one call fairly easily, but I can't figure out how to incorporate the other call.
The function:
def specialMultiplication(n):
    if n < 2:
        return 1
    return n * specialMultiplication(n-1) * specialMultiplication(n-2)
If I just had one of them, it would be really easy:
def specialMult(n, mult=1):
    while n > 1:
        (n, mult) = (n-1, n * mult) # Or n-2 for the second one
    return mult
I just can't figure out how to add the second call in to get the right answer overall. Thanks!

If you don't mind changing the structure of your algorithm a bit more, you can calculate the values in a bottom-up fashion, starting with the smallest values.
def specialMultiplication(max_n):
    a = b = 1
    for n in range(1, max_n+1):
        a, b = b, a*b*n
    return b
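As a quick sanity check (my addition, not part of the answer), this agrees with the recursive version for small inputs:

def special_multiplication_recursive(n):  # helper name is mine
    if n < 2:
        return 1
    return n * special_multiplication_recursive(n-1) * special_multiplication_recursive(n-2)

assert all(specialMultiplication(n) == special_multiplication_recursive(n) for n in range(15))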

Convert the recursion to an iterative function using an auxiliary "todo list":
def specialMultiplication(n):
    to_process = []
    result = 1
    if n >= 2:
        to_process.append(n)
    while to_process: # while list is not empty
        n = to_process.pop()
        result *= n
        if n >= 3:
            to_process.append(n-1)
        if n >= 4:
            to_process.append(n-2)
    return result
Create a work list (to_process).
If n >= 2, add n to the list.
While to_process is not empty, pop an item from the list and multiply it into result.
If n-1 < 2, don't perform the "left" operation (don't append n-1 to the work list).
If n-2 < 2, don't perform the "right" operation (don't append n-2 to the work list).
This method has the advantage of consuming less stack. I've checked the results against the recursive version for values from 1 to 25 and they were equal.
Note that it's still slow, since the complexity is O(2^n); it starts getting really slow around n=30 (the time roughly doubles each time n increases by 1). n=28 is computed in 12 seconds on my laptop.
I've successfully used this method to fix a stack overflow problem when performing a flood fill algorithm (Fatal Python error: Cannot recover from stack overflow. During Flood Fill), but here Blckknght's answer is better suited because it rethinks the computation from the start.

The OP's function has the same recursive structure as the Fibonacci and Lucas functions, just with different values for f0, f1, and g:
f(0) = f0
f(1) = f1
f(n) = g(f(n-2), f(n-1), n)
This is an example of a recurrence relation. Here is an iterative version of the general solution that calculates f(n) in n steps. It corresponds to a bottom-up tail recursion.
def f(n):
    if not isinstance(n, int): # Can be loosened a bit
        raise TypeError('Input must be an int') # Can be more informative
    if n < 0:
        raise ValueError('Input must be non-negative')
    if n == 0:
        return f0
    i, fi_1, fi = 1, f0, f1 # invariant: fi_1, fi = f(i-1), f(i)
    while i < n:
        i += 1
        fi_1, fi = fi, g(fi_1, fi, i) # restore invariant for new i
    return fi
Blckknght's answer is a simplified version of this.
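For example (my own instantiation, using the OP's function), specialMultiplication corresponds to f0 = f1 = 1 and g(a, b, n) = n * b * a:

f0 = f1 = 1

def g(fn_2, fn_1, n):
    # specialMultiplication(n) = n * specialMultiplication(n-1) * specialMultiplication(n-2)
    return n * fn_1 * fn_2

print([f(n) for n in range(7)])  # [1, 1, 2, 6, 48, 1440, 414720]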

Related

How to find the recurrence relation, and calculate Master Theorem of a Merge Sort Code?

I'm trying to find the Master Theorem of this Merge Sort Code, but first I need to find its recurrence relation, but I'm struggling to do and understand both. I already saw some similar questions here, but couldn't understand the explanations, like, first I need to find how many operations the code has? Could someone help me with that?
def mergeSort(alist):
    print("Splitting ",alist)
    if len(alist)>1:
        mid = len(alist)//2
        lefthalf = alist[:mid]
        righthalf = alist[mid:]
        mergeSort(lefthalf)
        mergeSort(righthalf)
        i=0
        j=0
        k=0
        while i < len(lefthalf) and j < len(righthalf):
            if lefthalf[i] < righthalf[j]:
                alist[k]=lefthalf[i]
                i=i+1
            else:
                alist[k]=righthalf[j]
                j=j+1
            k=k+1
        while i < len(lefthalf):
            alist[k]=lefthalf[i]
            i=i+1
            k=k+1
        while j < len(righthalf):
            alist[k]=righthalf[j]
            j=j+1
            k=k+1
    print("Merging ",alist)

alist = [54,26,93,17,77,31,44,55,20]
mergeSort(alist)
print(alist)
To determine the run-time of a divide-and-conquer algorithm using the Master Theorem, you need to express the algorithm's run-time as a recursive function of input size, in the form:
T(n) = aT(n/b) + f(n)
T(n) is how we're expressing the total runtime of the algorithm on an input size n.
a stands for the number of recursive calls the algorithm makes.
T(n/b) represents the recursive calls: The n/b signifies that the input size to the recursive calls is some particular fraction of original input size (the divide part of divide-and-conquer).
f(n) represents the amount of work you need to do in the main body of the algorithm, generally just to combine solutions from recursive calls into an overall solution (you could say this is the conquer part).
Here's a slightly re-factored definition of mergeSort:
def mergeSort(arr):
    if len(arr) <= 1: return # array size 1 or 0 is already sorted
    # split the array in half
    mid = len(arr)//2
    L = arr[:mid]
    R = arr[mid:]
    mergeSort(L)     # sort left half
    mergeSort(R)     # sort right half
    merge(L, R, arr) # merge sorted halves
We need to determine a, n/b, and f(n).
Because each call of mergeSort makes two recursive calls: mergeSort(L) and mergeSort(R), a=2:
T(n) = 2T(n/b) + f(n)
n/b represents the fraction of the current input that recursive calls are made with. Because we are finding the midpoint and splitting the input in half, passing one half the current array to each recursive call, n/b = n/2 and b=2. (if each recursive call instead got 1/4 of the original array b would be 4)
T(n) = 2T(n/2) + f(n)
f(n) represents all the work the algorithm does besides making recursive calls. Every time we call mergeSort, we calculate the midpoint in O(1) time.
We also split the array into L and R, and technically creating these two sub-array copies is O(n). Then, presuming mergeSort(L) sorted the left half of the array and mergeSort(R) sorted the right half, we still have to merge the sorted sub-arrays together to sort the entire array with the merge function. Together, this makes f(n) = O(1) + O(n) + complexity of merge. Now let's take a look at merge:
def merge(L, R, arr):
    i = j = k = 0                    # 3 assignments
    while i < len(L) and j < len(R): # 2 comparisons
        if L[i] < R[j]:              # 1 comparison, 2 array idx
            arr[k] = L[i]            # 1 assignment, 2 array idx
            i += 1                   # 1 assignment
        else:
            arr[k] = R[j]            # 1 assignment, 2 array idx
            j += 1                   # 1 assignment
        k += 1                       # 1 assignment
    while i < len(L):                # 1 comparison
        arr[k] = L[i]                # 1 assignment, 2 array idx
        i += 1                       # 1 assignment
        k += 1                       # 1 assignment
    while j < len(R):                # 1 comparison
        arr[k] = R[j]                # 1 assignment, 2 array idx
        j += 1                       # 1 assignment
        k += 1                       # 1 assignment
This function has more going on, but we just need to get its overall complexity class to be able to apply the Master Theorem accurately. We can count every single operation, that is, every comparison, array index, and assignment, or just reason about it more generally. Generally speaking, you can say that across the three while loops we are going to iterate through every member of L and R and assign them in order to the output array, arr, doing a constant amount of work for each element. Noting that we are processing every element of L and R (n total elements) and doing a constant amount of work for each element would be enough to say that merge is in O(n).
But, you can get more particular with counting operations if you want. For the first while loop, every iteration we make 3 comparisons, 5 array indexes, and 2 assignments (constant numbers), and the loop runs until one of L and R is fully processed. Then, one of the next two while loops may run to process any leftover elements from the other array, performing 1 comparison, 2 array indexes, and 3 variable assignments for each of those elements (constant work). Therefore, because each of the n total elements of L and R causes at most a constant number of operations to be performed across the while loops (either 10 or 6, by my count, so at most 10), and the i=j=k=0 statement is only 3 constant assignments, merge is in O(3 + 10*n) = O(n). Returning to the overall problem, this means:
f(n) = O(1) + O(n) + complexity of merge
     = O(1) + O(n) + O(n)
     = O(2n + 1)
     = O(n)
T(n) = 2T(n/2) + n
One final step before we apply the Master Theorem: we want f(n) written as n^c. For f(n) = n = n^1, c=1. (Note: things change very slightly if f(n) = n^c*log^k(n) rather than simply n^c, but we don't need to worry about that here)
You can now apply the Master Theorem, which in its most basic form says to compare a (how quickly the number of recursive calls grows) to b^c (how quickly the amount of work per recursive call shrinks). There are 3 possible cases, the logic of which I try to explain, but you can ignore the parenthetical explanations if they aren't helpful:
a > b^c, T(n) = O(n^log_b(a)). (The total number of recursive calls is growing faster than the work per call is shrinking, so the total work is determined by the number of calls at the bottom level of the recursion tree. The number of calls starts at 1 and is multiplied by a log_b(n) times because log_b(n) is the depth of the recursion tree. Therefore, total work = a^log_b(n) = n^log_b(a))
a = b^c, T(n) = O(f(n)*log(n)). (The growth in number of calls is balanced by the decrease in work per call. The work at each level of the recursion tree is therefore constant, so total work is just f(n)*(depth of tree) = f(n)*log_b(n) = O(f(n)*log(n)))
a < b^c, T(n) = O(f(n)). (The work per call shrinks faster than the number of calls increases. Total work is therefore dominated by the work at the top level of the recursion tree, which is just f(n))
For the case of mergeSort, we've seen that a = 2, b = 2, and c = 1. As a = b^c, we apply the 2nd case:
T(n) = O(f(n)*log(n)) = O(n*log(n))
And you're done. This may seem like a lot of work, but coming up with a recurrence for T(n) gets easier the more you do it, and once you have a recurrence it's very quick to check which case it falls under, making the Master Theorem quite a useful tool for solving more complicated divide/conquer recurrences.
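If it helps to see the result concretely, here is a small numerical check (my own sketch, not part of the derivation above; it assumes T(1) = 1) that expands the recurrence and compares it with n*log2(n):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # the mergeSort recurrence, with T(1) = 1 as an assumed base case
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in [2**k for k in range(4, 14)]:
    print(n, T(n), round(T(n) / (n * math.log2(n)), 3))

The last column approaches 1 as n grows, which is what T(n) = O(n*log(n)) predicts.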

QuickSort works slow for low range data

Lately I'm comparing different types of sorting algorithms in Python. I noticed that my quicksort isn't handling inputs with repeated values well.
def compare_asc(a, b):
    return a <= b

def partition(a, p, r, compare):
    pivot = a[r]
    i = p-1
    for j in range(p, r):
        if compare(a[j], pivot):
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i+1], a[r] = a[r], a[i+1]
    return i + 1

def part_quick_sort(a, p, r, compare):
    if p < r:
        q = partition(a, p, r, compare)
        part_quick_sort(a, p, q-1, compare)
        part_quick_sort(a, q+1, r, compare)

def quick_sort(a, compare):
    part_quick_sort(a, 0, len(a)-1, compare)
    return a
Then I test this
import numpy as np
from timeit import default_timer as timer
import sys
test_list1 = np.random.randint(-10000, 10000, size=10000).tolist()
start = timer()
test_list1 = quick_sort(test_list1, compare_asc)
elapsed = timer() - start
print(elapsed)
test_list2 = np.random.randint(0, 2, size=10000).tolist()
start = timer()
test_list2 = quick_sort(test_list2, compare_asc)
elapsed = timer() - start
print(elapsed)
In this example I get RecursionError: maximum recursion depth exceeded in comparison, so I added sys.setrecursionlimit(1000000) and after that I get this output:
0.030029324000224733
5.489867554000284
Can anyone explain why it throws this recursion depth error only while sorting the second list? And why is there such a big time difference?
Here's a hint: pass a list where all the elements are the same, and watch what it does line by line. It will take time quadratic in the number of elements, and recurse to a level approximately equal to the number of elements.
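To make that concrete, here is a small instrumentation sketch (my addition; it reuses partition and compare_asc from the question) that records how deep the recursion goes on an all-equal list:

import sys

def traced_quick_sort(a, p, r, compare, depth=0, stats=None):
    # same recursion as part_quick_sort, but tracking the deepest level reached
    if stats is None:
        stats = {"max_depth": 0}
    stats["max_depth"] = max(stats["max_depth"], depth)
    if p < r:
        q = partition(a, p, r, compare)
        traced_quick_sort(a, p, q - 1, compare, depth + 1, stats)
        traced_quick_sort(a, q + 1, r, compare, depth + 1, stats)
    return stats

sys.setrecursionlimit(5000)
data = [1] * 2000  # all elements equal
print(traced_quick_sort(data, 0, len(data) - 1, compare_asc))
# {'max_depth': 1999}: every partition puts all remaining elements on one side,
# so the recursion depth grows linearly with the list length.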
The usual quicksort partition implementations proceed from both ends, so that in the all-equal case the list slice is approximately cut in half. You can get decent performance in this case for your "only look left-to-right" approach, but the clearest way to do so is to partition into three regions: "less than", "equal", and "greater than".
That can be done in a single left-to-right pass, and is usually called the "Dutch national flag problem". As the text on the linked page says,
The solution to this problem is of interest for designing sorting algorithms; in particular, variants of the quicksort algorithm that must be robust to repeated elements need a three-way partitioning function ...
CODE
For concreteness, here's a complete implementation doing one-pass "left to right" single-pivot 3-way partitioning. It also incorporates other well-known changes needed to make a quicksort robust for production use. Note:
You cannot create a pure quicksort that avoids worst-case quadratic time. The best you can do is average-case O(N*log(N)) time, and (as below, for one way) make worst-case O(N**2) time unlikely.
You can (as below) guarantee worst-case logarithmic recursion depth.
In this approach, a list of all-equal elements is not a bad case, but a very good case: the partitioning routine is called just once total.
The code:
from random import randrange
def partition(a, lo, hi, pivot):
    i = L = lo
    R = hi
    # invariants:
    # a[lo:L] < pivot
    # a[L:i] == pivot
    # a[i:R] unknown
    # a[R:hi] > pivot
    while i < R:
        elt = a[i]
        if elt < pivot:
            a[L], a[i] = elt, a[L]
            L += 1
            i += 1
        elif elt > pivot:
            R -= 1
            a[R], a[i] = elt, a[R]
        else:
            i += 1
    return L, R
def qsort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    while True: # sort a[lo:hi] in place
        if hi - lo <= 1:
            return
        # select pivot at random; else it's easy to construct
        # inputs that systematically require quadratic time
        L, R = partition(a, lo, hi, a[randrange(lo, hi)])
        # must recur on only the shorter chunk to guarantee
        # worst-case recursion depth is logarithmic in hi-lo
        if L - lo <= hi - R:
            qsort(a, lo, L)
            # loop to do qsort(a, R, hi)
            lo = R
        else:
            qsort(a, R, hi)
            # loop to do qsort(a, lo, L)
            hi = L
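For completeness, a usage sketch on the failing case from the question (my addition, not part of the original answer):

import numpy as np
from timeit import default_timer as timer

test_list2 = np.random.randint(0, 2, size=10000).tolist()
start = timer()
qsort(test_list2)  # sorts in place; no recursion limit tweaking needed
print(timer() - start)
assert test_list2 == sorted(test_list2)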

Entropy Estimator based on the Lempel-Ziv algorithm using Python

This function estimates the entropy of a time series. It is based on the Lempel-Ziv compression algorithm. For a time series of length n, the entropy is estimated as:
E = ( (1/n) * SUM_i L_i )^(-1) * ln(n)
where L_i is the length of the shortest substring starting at position i that does not appear earlier, i.e. anywhere in positions 1 to i-1. The estimated entropy converges to the real entropy of the time series as n approaches infinity.
There is already an implementation in MATLAB functions:
https://cn.mathworks.com/matlabcentral/fileexchange/51042-entropy-estimator-based-on-the-lempel-ziv-algorithm?s_tid=prof_contriblnk
I would like to implement it in Python, and I did it like this:
import math

def contains(small, big):
    for i in range(len(big)-len(small)+1):
        if big[i:i+len(small)] == small:
            return True
    return False

def actual_entropy(l):
    n = len(l)
    sequence = [l[0]]
    sum_gamma = 0
    for i in range(1, n):
        for j in range(i+1, n+1):
            s = l[i:j]
            if contains(s, sequence) != True:
                sum_gamma += len(s)
                sequence.append(l[i])
                break
    ae = 1 / (sum_gamma / n) * math.log(n)
    return ae
However, I found it calculates too slowly as the data size gets bigger. For example, I used a list of 23832 elements as input, and the time consumed is like this: (data can be found here)
0-1000: 1.7068431377410889 s
1000-2000: 18.561192989349365 s
2000-3000: 84.82257103919983 s
3000-4000: 243.5819959640503 s
...
I have thousands of lists like this to calculate, and such long run times are unbearable. How should I optimize this function and make it work faster?
I played around a bit and tried a few different approaches from another thread on StackOverflow. And this is the code I came up with:
import math

def contains(small, big):
    try:
        big.tostring().index(small.tostring())//big.itemsize
        return True
    except ValueError:
        return False

def actual_entropy(l):
    n = len(l)
    sum_gamma = 0
    for i in range(1, n):
        sequence = l[:i]
        for j in range(i+1, n+1):
            s = l[i:j]
            if contains(s, sequence) != True:
                sum_gamma += len(s)
                break
    ae = 1 / (sum_gamma / n) * math.log(n)
    return ae
Funnily enough casting the numpy arrays to strings is faster than working with strings directly. A very crude benchmark of my code on my machine with the data you provide is:
N: my code - your code
1000: 0.039s - 1.039s
2000: 0.266s - 18.490s
3000: 0.979s - 74.761s
4000: 2.891s - 285.488s
You may be able to make this even faster if you parallelize the outer loop.
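One caveat worth adding (my note, not from the answer above): contains() relies on numpy's tostring() and itemsize, so the input has to be a numpy array rather than a plain list, and on newer numpy versions tostring() has been deprecated/removed in favour of tobytes(). A usage sketch:

import numpy as np

series = np.random.randint(0, 4, size=2000)  # stand-in for the real time series
print(actual_entropy(series))
# on numpy >= 2.0, replace tostring() with tobytes() inside contains()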

What would be the best answer for this Fibonacci exercise in Python?

What's the best answer for this Fibonacci exercise in Python?
http://www.scipy-lectures.org/intro/language/functions.html#exercises
Exercise: Fibonacci sequence
Write a function that displays the n first terms of the Fibonacci
sequence, defined by:
u0 = 1; u1 = 1
u(n+2) = u(n+1) + u(n)
If this were simply asking for a Fibonacci function, I would write it like this:
def fibo_R(n):
    if n == 1 or n == 2:
        return 1
    return fibo_R(n-1) + fibo_R(n-2)

print(fibo_R(6))
... However, in this exercise, the initial conditions are both 1 and 1, and the calculation is going towards the positive direction (+). I don't know how to set the end condition. I've searched for an answer, but I couldn't find any. How would you answer this?
Note that u_(n+2) = u_(n+1) + u_n is equivalent to u_n = u_(n-1) + u_(n-2), i.e. your previous code will still apply. Fibonacci numbers are by definition defined in terms of their predecessors, no matter how you phrase the problem.
A good approach to solve this is to define a generator which produces the elements of the Fibonacci sequence on demand:
def fibonacci():
    i = 1
    j = 1
    while True:
        yield i
        x = i + j
        i = j
        j = x
You can then take the first N items of the generator via e.g. itertools.islice (see the islice example further below), or use enumerate to keep track of how many numbers you have seen:
for i, x in enumerate(fibonacci()):
    if i >= n:  # stop once the first n numbers have been printed
        break
    print(x)
Having a generator means that you can use the same code for solving many different problems (and quite efficiently, too), such as:
getting the n'th fibonacci number
getting the first n fibonacci numbers
getting all fibonacci numbers satisfying some predicate (e.g. all fibonacci numbers lower than 100)
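For example, the itertools.islice approach mentioned above (a small sketch of my own):

from itertools import islice

n = 10
print(list(islice(fibonacci(), n)))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]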
The best way to calculate a fibonacci sequence is by simply starting at the beginning and looping until you have calculated the n-th number. Recursion produces way too many method calls since you are calculating the same numbers over and over again.
This function calculates the first n fibonacci numbers, stores them in a list and then prints them out:
def fibonacci(n):
    array = [1]
    a = 0  # seed values so the sequence continues 1, 1, 2, 3, ...
    b = 1
    if n == 1:
        print(array)
        return
    for i in range(n-1):
        fib = a + b
        a = b
        b = fib
        array.append(fib)
    print(array)
If you want a super memory-efficient solution, use a generator that only produces the next number on demand:
def fib_generator():
    e1, e2 = 0, 1
    while True:
        e1, e2 = e2, e1+e2
        yield e1

f = fib_generator()
print(next(f))
print(next(f))
print(next(f))

## dump the rest with a for-loop
for i in range(3, 50):
    print(next(f))
The recursive solution is the most elegant, but it is slow. Keiwan's loop is the fastest for a large number of elements.
Yes, definitely no globals as correctly observed by DSM. Thanks!
An alternative recursive just to show that things can be done in slightly different ways:
def fib2(n): return n if n < 2 else fib2( n - 1 ) + fib2( n - 2 )

Subset sum Problem

Recently I became interested in the subset-sum problem, which is finding a zero-sum subset in a superset. I found some solutions on SO; in addition, I came across a particular solution which uses a dynamic programming approach. I translated that solution into Python based on its qualitative description. I'm trying to optimize this for larger lists, which eats up a lot of my memory. Can someone recommend optimizations or other techniques to solve this particular problem? Here's my attempt in Python:
import random
from time import time
from itertools import product

time0 = time()

# create a zero matrix of size a (row), b (col)
def create_zero_matrix(a,b):
    return [[0]*b for x in xrange(a)]

# generate a list of size num with random integers with an upper and lower bound
def random_ints(num, lower=-1000, upper=1000):
    return [random.randrange(lower,upper+1) for i in range(num)]

# split a list up into N and P where N is the sum of the negative values and P the sum of the positive values.
# 0 does not count because of additive identity
def split_sum(A):
    N_list = []
    P_list = []
    for x in A:
        if x < 0:
            N_list.append(x)
        elif x > 0:
            P_list.append(x)
    return [sum(N_list), sum(P_list)]

# since the column indexes are in the range from 0 to P - N
# we would like to retrieve them based on the index in the range N to P
# n := row, m := col
def get_element(table, n, m, N):
    if n < 0:
        return 0
    try:
        return table[n][m - N]
    except:
        return 0

# same definition as above
def set_element(table, n, m, N, value):
    table[n][m - N] = value

# input array
#A = [1, -3, 2, 4]
A = random_ints(200)
[N, P] = split_sum(A)

# create a zero matrix of size m (row) by n (col)
#
# m := the number of elements in A
# n := P - N + 1 (by definition N <= s <= P)
#
# each element in the matrix will be a value of either 0 (false) or 1 (true)
m = len(A)
n = P - N + 1
table = create_zero_matrix(m, n)

# set first element in index (0, A[0]) to be true
# Definition: Q(1,s) := (x1 == s). Note that index starts at 0 instead of 1.
set_element(table, 0, A[0], N, 1)

# iterate through each table element
#for i in xrange(1, m): #row
#    for s in xrange(N, P + 1): #col
for i, s in product(xrange(1, m), xrange(N, P + 1)):
    if get_element(table, i - 1, s, N) or A[i] == s or get_element(table, i - 1, s - A[i], N):
        #set_element(table, i, s, N, 1)
        table[i][s - N] = 1

# find zero-sum subset solution
s = 0
solution = []
for i in reversed(xrange(0, m)):
    if get_element(table, i - 1, s, N) == 0 and get_element(table, i, s, N) == 1:
        s = s - A[i]
        solution.append(A[i])

print "Solution: ", solution

time1 = time()
print "Time execution: ", time1 - time0
I'm not quite sure if your solution is exact or a PTA (poly-time approximation).
But, as someone pointed out, this problem is indeed NP-Complete.
Meaning, every known (exact) algorithm has exponential time behavior in the size of the input.
Meaning, if you can process 1 operation in .01 nanosecond then, for a list of 59 elements it'll take:
2^59 ops --> 2^59 / 10.000.000.000 seconds (about 2^26 seconds) --> 2^26 / (3600 x 24 x 365) years, i.e. roughly a year of computation.
You can find heuristics, which give you just a CHANCE of finding an exact solution in polynomial time.
On the other hand, if you restrict the problem (to another one) using bounds for the values of the numbers in the set, then the problem complexity reduces to polynomial time. But even then the memory space consumed will be a polynomial of VERY High Order.
The memory consumed will be much larger than the few gigabytes you have in memory.
And even much larger than the few tera-bytes on your hard drive.
( That's for small values of the bound for the value of the elements in the set )
Maybe this is the case for your dynamic programming algorithm.
It seemed to me that you were using a bound of 1000 when building your initialization matrix.
You can try a smaller bound. That is... if your input consistently consists of small values.
Good Luck!
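As an aside (my own sketch, not part of the answer above), the pseudo-polynomial idea alluded to here can be written with a set of reachable subset sums, which stays within P - N + 1 entries and only answers the existence question:

def zero_sum_subset_exists(nums):
    # track every sum reachable by a non-empty subset seen so far
    reachable = set()
    for x in nums:
        reachable |= {x} | {s + x for s in reachable}
        if 0 in reachable:  # note: a lone 0 in nums counts as a trivial solution
            return True
    return False

print(zero_sum_subset_exists([1, -3, 2, 4]))  # True, since 1 - 3 + 2 == 0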
Someone on Hacker News came up with the following solution to the problem, which I quite liked. It just happens to be in python :):
def subset_summing_to_zero(activities):
    subsets = {0: []}
    for (activity, cost) in activities.items():
        old_subsets = subsets
        subsets = {}
        for (prev_sum, subset) in old_subsets.items():
            subsets[prev_sum] = subset
            new_sum = prev_sum + cost
            new_subset = subset + [activity]
            if 0 == new_sum:
                new_subset.sort()
                return new_subset
            else:
                subsets[new_sum] = new_subset
    return []
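A quick usage sketch (my addition; the dictionary maps an activity name to its cost):

activities = {"a": 7, "b": -3, "c": -4, "d": 5}
print(subset_summing_to_zero(activities))  # ['a', 'b', 'c'], since 7 - 3 - 4 == 0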
I spent a few minutes with it and it worked very well.
An interesting article on optimizing python code is available here. Basically the main result is that you should inline your frequent loops, so in your case this would mean instead of calling get_element twice per loop, put the actual code of that function inside the loop in order to avoid the function call overhead.
Hope that helps! Cheers
First thing that catches the eye:
def split_sum(A):
    N_list = 0
    P_list = 0
    for x in A:
        if x < 0:
            N_list += x
        elif x > 0:
            P_list += x
    return [N_list, P_list]
Some advice:
Try to use a 1D list and use bitarray to reduce the memory footprint to a minimum (http://pypi.python.org/pypi/bitarray), so you will just change the get/set functions. This should reduce your memory footprint by a factor of at least 64 (an integer in a list is a pointer to a typed integer object, so the factor can be 3*32); see the sketch below.
Avoid using try/except; instead, figure out the proper ranges at the beginning. You might find that you gain a huge amount of speed.
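A minimal sketch of those two points (my own, assuming the bitarray package from the link above is installed): store the m-by-n table as one flat bitarray indexed by row*n_cols + (s - N), and replace the try/except with explicit range checks:

from bitarray import bitarray

def create_bit_table(m, n_cols):
    table = bitarray(m * n_cols)  # 1 bit per cell instead of a Python int object
    table.setall(False)
    return table

def get_bit(table, n_cols, row, s, N):
    if row < 0 or not (0 <= s - N < n_cols):
        return 0
    return table[row * n_cols + (s - N)]

def set_bit(table, n_cols, row, s, N, value):
    table[row * n_cols + (s - N)] = bool(value)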
The following code works for Python 3.3+. I have used the itertools module in Python, which has some great methods to use.
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))

nums = input("Enter the Elements").strip().split()
inputSum = int(input("Enter the Sum You want"))

for i, combo in enumerate(powerset(nums), 1):
    sum = 0
    for num in combo:
        sum += int(num)
    if sum == inputSum:
        print(combo)
The input/output is as follows:
Enter the Elements 1 2 3 4
Enter the Sum You want 5
('1', '4')
('2', '3')
Just change the values in your set w, make an array x as big as the length of w, then pass the sum for which you want subsets as the last argument to the subsetsum function, and you will be done (if you want to check with your own values).
def subsetsum(cs, k, r, x, w, d):
    x[k] = 1
    if cs + w[k] == d:
        for i in range(0, k+1):
            if x[i] == 1:
                print(w[i], end=" ")
        print()
    elif cs + w[k] + w[k+1] <= d:
        subsetsum(cs + w[k], k+1, r - w[k], x, w, d)
    if (cs + r - w[k] >= d) and (cs + w[k] <= d):
        x[k] = 0
        subsetsum(cs, k+1, r - w[k], x, w, d)

# driver for the above code
w = [2, 3, 4, 5, 0]
x = [0, 0, 0, 0, 0]
subsetsum(0, 0, sum(w), x, w, 7)
