Why is my recursive function counting the last element of the array twice? - python

I'm trying to understand recursive functions, so I created this program, but the output is incorrect. I would like to understand what I am doing wrong here.
class Recursive:
    def __init__(self, arr):
        self.arr = arr
        self.sum = 0

    def sumRecursive(self):
        if len(self.arr) == 0:
            return self.sum
        self.sum = self.arr.pop(0)
        return self.sum + self.sumRecursive()

def main():
    recur = Recursive([1,2,3])
    print(recur.sumRecursive())

main()
output: 9

There are two types of recursion to consider: tail recursion, where the return value of a single recursive call is returned as-is, and "regular" recursion, where you do something with the return value(s) of the recursive call(s) before returning yourself.
You are combining the two. You either add a value from the list to the recursive sum, using no accumulator:
def non_tail_recursive(self):
    if len(self.arr) == 0:
        return 0
    return self.arr.pop(0) + self.non_tail_recursive()
or you use an accumulator:
def tail_recursive(self):
    if len(self.arr) == 0:
        return self.sum
    self.sum += self.arr.pop(0)
    return self.tail_recursive()

You don't usually use an object with state to implement recursion. If you're keeping state, then you often don't need a recursive solution at all.
Here's how to do a "stateless" recursive sum.
def sumRecursive(arr):
    if not arr:
        return 0
    return arr[0] + sumRecursive(arr[1:])

def main():
    print(sumRecursive([1,2,3]))

main()

Your self.sum attribute is redundant. A recursive algorithm rarely needs member attributes to pass information along; the call arguments and return values already carry it.
class Recursive:
    def __init__(self, arr):
        self.arr = arr

    def sumRecursive(self):
        if not len(self.arr):
            return 0
        return self.arr.pop(0) + self.sumRecursive()

def main():
    recur = Recursive([1,2,3])
    print(recur.sumRecursive())

main()
Output: 6

To answer your question without rewriting your code (since obviously there are better ways to sum arrays): the last leg of your recursion calls self.sum + self.sumRecursive(), which hits your sumRecursive function one last time. That final call returns (from your if statement) self.sum, which still holds the last element of your list, an element you have already added.
Instead, when the array is empty, return 0.
class Recursive:
    def __init__(self, arr):
        self.arr = arr
        self.sum = 0

    def sumRecursive(self):
        if len(self.arr) == 0:
            return 0
        self.sum = self.arr.pop(0)
        return self.sum + self.sumRecursive()

def main():
    recur = Recursive([1,2,3,4])
    print(recur.sumRecursive())

main()
Optionally move your if to the bottom where I personally think it makes more sense:
def sumRecursive(self):
    self.sum = self.arr.pop(0)
    if len(self.arr) == 0:
        return self.sum
    else:
        return self.sum + self.sumRecursive()

fundamentals
You are making this more challenging for yourself because you are tangling your sum function with the class. The sum function simply needs to work on a list, but the context of your object, self, is making it hard for you to focus.
Recursion is a functional heritage and this means writing your code in a functional and modular way. It's easier to read/write small, single-purpose functions and it promotes reuse within other areas of your program. Next time you need to sum a list in another class, do you want to rewrite sumRecursive again?
Write your summing function once and then import it where it's needed -
# mymath.py

def mysum(t):
    if not t:
        return 0
    else:
        return t.pop() + mysum(t)
See how mysum has no concern for context, self? Why should it? All it does is sum the elements of a list.
Now write your recursive module -
# recursive.py

from mymath import mysum

class Recursive:
    def __init__(self, arr): self.arr = arr
    def sum(self): return mysum(self.arr)
See how Recursive.sum just hands off self.arr to mysum? It doesn't have to be more complicated than that. mysum will work on every list, not just lists within your Recursive module. We don't have to know how mysum works, that is the concern of the mymath module.
Now we write the main module. Each module represents a barrier of abstraction. This means that the details of a module shouldn't spill over into other modules. We don't know how Recursive actually sums the input, and from the caller's point of view, we don't care. That is the concern of Recursive module.
# main.py

from recursive import Recursive

recur = Recursive([1,2,3])
print(recur.sum())
6
going functional
Above we wrote mysum using the .pop technique in your question. I did this because that seems to be how you are understanding the problem right now. But watch what happens when we do this -
x = [1,2,3]
print(mysum(x)) # 6
print(mysum(x)) # 0
Why does mysum return a different answer the second time? Because as it is written now, mysum uses t.pop(), which mutates t. When mysum is finished running, t is completely emptied!
But why would we write our function like this? What if 5 + x returned a different result each time we called it?
x = 3
print(5 + x) # 8
print(5 + x) # 8
How annoying it would be if we could not depend on values not to change. The sum of the input, [1,2,3], is 6. But as written, mysum returns 6 and empties the input. This second part, emptying (changing) the input, is known as a side effect: the desired effect is to sum, and emptying the input list is an unintended consequence of using .pop to calculate the result. This is not the functional way. Functional style means avoiding mutations, variable reassignments, and other side effects.
def mysum(t):
    if not t:
        return 0
    else:
        return t[0] + mysum(t[1:])
When written this way, t is not changed. This allows us to use equational reasoning, whereby we can substitute any function call for its return value and always get the correct answer:
x = [1,2,3]
mysum(x)
== 1 + mysum([2,3])
== 1 + 2 + mysum([3])
== 1 + 2 + 3 + mysum([])
== 1 + 2 + 3 + 0
== 1 + 2 + 3
== 1 + 5
== 6
And x was not changed as a result of running mysum -
print(x)
# [1,2,3]
Note, the Recursive module does not need to make any change to receive the benefit of rewriting mysum in this way. Before the change, we would've seen this behavior -
# main.py
from recursive import Recursive
recur = Recursive([1,2,3])
print(recur.sum())
print(recur.sum())
6
0
Because the first call to sum passes self.arr to mysum which empties self.arr as a side effect. A second call to recur.sum() will sum an empty self.arr! After fixing mysum we get the intended behaviour -
# main.py
from recursive import Recursive
recur = Recursive([1,2,3])
print(recur.sum())
print(recur.sum())
6
6
additional reading
I've written extensively about the techniques used in this answer. Follow the links to see them used in other contexts with additional explanation provided -
I want to reverse the stack but i dont know how to use recursion for reversing this… How can i reverse the stack without using Recursion
Finding all maze solutions with Python
Return middle node of linked list with recursion
How do i recursively find a size of subtree based on any given node? (BST)
Deleting node in BST Python


Turning a recursive function into an iterative function

I have written the following recursive function, but I am incurring a runtime error due to maximum recursion depth. I was wondering whether it is possible to write an iterative function to overcome this:
def finaldistance(n):
    if n % 2 == 0:
        return 1 + finaldistance(n // 2)
    elif n != 1:
        a = finaldistance(n - 1) + 1
        b = distance(n)
        return min(a, b)
    else:
        return 0
What I have tried is this, but it does not seem to be working:
def finaldistance(n, acc):
    while n > 1:
        if n % 2 == 0:
            (n, acc) = (n // 2, acc + 1)
        else:
            a = finaldistance(n - 1, acc) + 1
            b = distance(n)
            if a < b:
                (n, acc) = (n - 1, acc + 1)
            else:
                (n, acc) = (1, acc + distance(n))
    return acc
Johnbot's solution shows you how to solve your specific problem. How in general can we remove this recursion? Let me show you how, by making a series of small, clearly correct, clearly safe refactorings.
First, here's a slightly rewritten version of your function. I hope you agree it is the same:
def f(n):
    if n % 2 == 0:
        return 1 + f(n // 2)
    elif n != 1:
        a = f(n - 1) + 1
        b = d(n)
        return min(a, b)
    else:
        return 0
I want the base case to be first. This function is logically the same:
def f(n):
    if n == 1:
        return 0
    if n % 2 == 0:
        return 1 + f(n // 2)
    a = f(n - 1) + 1
    b = d(n)
    return min(a, b)
I want the code that comes after each recursive call to be a method call and nothing else. These functions are logically the same:
def add_one(n, x):
    return 1 + x

def min_distance(n, x):
    a = x + 1
    b = d(n)
    return min(a, b)

def f(n):
    if n == 1:
        return 0
    if n % 2 == 0:
        return add_one(n, f(n // 2))
    return min_distance(n, f(n - 1))
Similarly, we add helper functions that compute the recursive argument:
def half(n):
    return n // 2

def less_one(n):
    return n - 1

def f(n):
    if n == 1:
        return 0
    if n % 2 == 0:
        return add_one(n, f(half(n)))
    return min_distance(n, f(less_one(n)))
Again, make sure you agree that this program is logically the same. Now I'm going to simplify the computation of the argument:
def get_argument(n):
    return half if n % 2 == 0 else less_one

def f(n):
    if n == 1:
        return 0
    argument = get_argument(n)  # argument is a function!
    if n % 2 == 0:
        return add_one(n, f(argument(n)))
    return min_distance(n, f(argument(n)))
Now I'm going to do the same thing to the code after the recursion, and we'll get down to a single recursion:
def get_after(n):
    return add_one if n % 2 == 0 else min_distance

def f(n):
    if n == 1:
        return 0
    argument = get_argument(n)
    after = get_after(n)  # this is also a function!
    return after(n, f(argument(n)))
Now I'm noticing that we're passing n to get_after, and then passing it right along to "after" again. I'm going to curry these functions to eliminate that problem. This step is tricky. Make sure you understand it!
def add_one(n):
    return lambda x: x + 1

def min_distance(n):
    def nested(x):
        a = x + 1
        b = d(n)
        return min(a, b)
    return nested
These functions did take two arguments. Now they take one argument, and return a function that takes one argument! So we refactor the use site:
def get_after(n):
    return add_one(n) if n % 2 == 0 else min_distance(n)
and here:
def f(n):
    if n == 1:
        return 0
    argument = get_argument(n)
    after = get_after(n)  # now this is a function of one argument, not two
    return after(f(argument(n)))
Similarly we notice that we are calling get_argument(n)(n) to get the argument. Let's simplify that:
def get_argument(n):
    return half(n) if n % 2 == 0 else less_one(n)
And let's make it just slightly more general:
base_case_value = 0

def is_base_case(n):
    return n == 1

def f(n):
    if is_base_case(n):
        return base_case_value
    argument = get_argument(n)
    after = get_after(n)
    return after(f(argument))
OK, we now have our program in an extremely compact form. The logic has been spread out into multiple functions, and some of them are curried, to be sure. But now that the function is in this form we can easily remove the recursion. This is the bit that is really tricky: turning the whole thing into an explicit stack:
def f(n):
    # Let's make a stack of afters.
    afters = []
    while not is_base_case(n):
        argument = get_argument(n)
        after = get_after(n)
        afters.append(after)
        n = argument
    # Now we have a stack of afters:
    x = base_case_value
    while len(afters) != 0:
        after = afters.pop()
        x = after(x)
    return x
Study this implementation very carefully. You will learn a lot from it. Remember, when you do a recursive call:
after(f(something))
you are saying that after is the continuation (the thing that comes next) of the call to f. We typically implement continuations by putting information about the location in the caller's code onto the "call stack". What we're doing in this removal of recursion is simply moving continuation information off of the call stack and onto a stack data structure. But the information is exactly the same.
The important thing to realize here is that we typically think of the call stack as "what is the thing that happened in the past that got me here?". That is exactly backwards. The call stack tells you what you have to do after this call is finished! So that's the information that we encode in the explicit stack. Nowhere do we encode what we did before each step as we "unwind the stack", because we don't need that information.
As I said in my initial comment: there is always a way to turn a recursive algorithm into an iterative one but it is not always easy. I've shown you here how to do it: carefully refactor the recursive method until it is extremely simple. Get it down to a single recursion by refactoring it. Then, and only then, apply this transformation to get it into an explicit stack form. Practice that until you are comfortable with this program transformation. You can then move on to more advanced techniques for removing recursions.
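To make the transformation concrete, here is one way the refactored pieces might be assembled into a runnable whole. This is only a sketch: the stub d(n) below is a placeholder I have made up, since the original distance function was never shown.

def d(n):
    return n  # placeholder stub; substitute the real distance function here

base_case_value = 0

def is_base_case(n):
    return n == 1

def get_argument(n):
    return n // 2 if n % 2 == 0 else n - 1

def add_one(n):
    return lambda x: 1 + x

def min_distance(n):
    def nested(x):
        return min(x + 1, d(n))
    return nested

def get_after(n):
    return add_one(n) if n % 2 == 0 else min_distance(n)

def f(n):
    # build the stack of afters, then unwind it; no recursion anywhere
    afters = []
    while not is_base_case(n):
        afters.append(get_after(n))
        n = get_argument(n)
    x = base_case_value
    while afters:
        x = afters.pop()(x)
    return x

print(f(100))  # a loop, so there is no recursion-depth limit to hit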
Note that of course this is almost certainly not the "pythonic" way to solve this problem; you could likely build a much more compact, understandable method using lazily evaluated list comprehensions. This answer was intended to answer the specific question that was asked: how in general do we turn recursive methods into iterative methods?
I mentioned in a comment that a standard technique for removing a recursion is to build an explicit list as a stack. This shows that technique. There are other techniques: tail recursion, continuation passing style and trampolines. This answer is already too long, so I'll cover those in a follow-up answer.
Read this answer after you read my first answer.
Again, we are answering the question in general of "how do you turn a recursive algorithm into an iterative algorithm", in this case in Python. As noted previously, this is about exploring the general idea of transforming a program; this is not the "pythonic" way to solve the specific problem.
In my first answer I started by rewriting the program into this form:
def f(n):
    if is_base_case(n):
        return base_case_value
    argument = get_argument(n)
    after = get_after(n)
    return after(f(argument))
And then transformed it into this form:
def f(n):
    # Let's make a stack of afters.
    afters = []
    while not is_base_case(n):
        argument = get_argument(n)
        after = get_after(n)
        afters.append(after)
        n = argument
    # Now we have a stack of afters:
    x = base_case_value
    while len(afters) != 0:
        after = afters.pop()
        x = after(x)
    return x
The technique here is to construct an explicit stack of "after" calls for a particular input, and then once we have it, run down the whole stack. We are essentially simulating what the runtime already does: constructs a stack of "continuations" that say what to do next.
A different technique is to let the function itself decide what to do with its continuation; this is called "continuation passing style". Let's explore it.
This time, we're going to add a parameter c to the recursive method f. c is a function that takes what would normally be the return value of f, and does whatever was supposed to happen after the call to f. That is, it is explicitly the continuation of f. The method f then becomes "void returning".
The base case is easy. What do we do if we're in the base case? We call the continuation with the value we would have returned:
def f(n, c):
    if is_base_case(n):
        c(base_case_value)
        return
Easy peasy. What about the non-base case? Well, what were we going to do in the original program? We were going to (1) get the arguments, (2) get the "after" -- the continuation of the recursive call, (3) do the recursive call, (4) call "after", its continuation, and (5) return the computed value to whatever the continuation of f is.
We're going to do all the same things, except that when we do step (3) now we need to pass in a continuation that does steps 4 and 5:
    argument = get_argument(n)
    after = get_after(n)
    f(argument, lambda x: c(after(x)))
Hey, that is so easy! What do we do after the recursive call? Well, we call after with the value returned by the recursive call. But now that value is going to be passed to the recursive call's continuation function, so it just goes into x. What happens after that? Well, whatever was going to happen next, and that's in c, so it needs to be called, and we're done.
Let's try it out. Previously we would have said
print(f(100))
but now we have to pass in what happens after f(100). Well, what happens is, the value gets printed!
f(100, print)
and we're done.
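To see the shape of continuation passing style on a smaller, self-contained example (my own illustration, not part of the original answer), here is the list-summing function from the first question rewritten so that it never returns a result; it hands the result to its continuation instead:

def cps_sum(t, c):
    # c receives what a normal sum function would have returned
    if not t:
        c(0)
        return
    cps_sum(t[1:], lambda x: c(t[0] + x))

cps_sum([1, 2, 3], print)  # prints 6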
So... big deal. The function is still recursive. Why is this interesting? Because the function is now tail recursive! That is, the last thing it does in the non-base case is call itself. Consider a silly case:
def tailcall(x, sum):
    if x <= 0:
        return sum
    return tailcall(x - 1, sum + x)
If we call tailcall(10, 0) it calls tailcall(9, 10), which calls (8, 19), and so on. But any tail-recursive method we can rewrite into a loop very, very easily:
def tailcall(x, sum):
    while True:
        if x <= 0:
            return sum
        # mirror the recursive call tailcall(x - 1, sum + x)
        x, sum = x - 1, sum + x
So can we do the same thing with our general case?
# This is wrong!
def f(n, c):
    while True:
        if is_base_case(n):
            c(base_case_value)
            return
        argument = get_argument(n)
        after = get_after(n)
        n = argument
        c = lambda x: c(after(x))
Do you see what is wrong? The lambda is closed over c and after, which means that every lambda will use the current value of c and after, not the value it had when the lambda was created. So this is broken, but we can fix it easily by creating a scope which introduces new variables every time it is invoked:
def continuation_factory(c, after):
    return lambda x: c(after(x))

def f(n, c):
    while True:
        if is_base_case(n):
            c(base_case_value)
            return
        argument = get_argument(n)
        after = get_after(n)
        n = argument
        c = continuation_factory(c, after)
And we're done! We've turned this recursive algorithm into an iterative algorithm.
Or... have we?
Think about this really carefully before you read on. Your spider sense should be telling you that something is wrong here.
The problem we started with was that a recursive algorithm is blowing the stack. We've turned this into an iterative algorithm -- there's no recursive call at all here! We just sit in a loop updating local variables.
The question though is -- what happens when the final continuation is called, in the base case? What does that continuation do? Well, it calls its after, and then it calls its continuation. What does that continuation do? Same thing.
All we've done here is moved the recursive control flow into a collection of function objects that we've built up iteratively, and calling that thing is still going to blow the stack. So we haven't actually solved the problem.
Or... have we?
What we can do here is add one more level of indirection, and that will solve the problem. (This solves every problem in computer programming except one problem; do you know what that problem is?)
What we'll do is we'll change the contract of f so that it is no longer "I am void-returning and will call my continuation when I'm done". We will change it to "I will return a function that, when it is called, calls my continuation. And furthermore, my continuation will do the same."
That sounds a little tricky but really its not. Again, let's reason it through. What does the base case have to do? It has to return a function which, when called, calls my continuation. But my continuation already meets that requirement:
def f(n, c):
    if is_base_case(n):
        return c(base_case_value)
What about the recursive case? We need to return a function, which when called, executes the recursion. The continuation of that call needs to be a function that takes a value and returns a function that when called executes the continuation on that value. We know how to do that:
    argument = get_argument(n)
    after = get_after(n)
    return lambda: f(argument, lambda x: lambda: c(after(x)))
OK, so how does this help? We can now move the loop into a helper function:
def trampoline(f, n, c):
    t = f(n, c)
    while t is not None:
        t = t()
And call it:
trampoline(f, 3, print)
And holy goodness it works.
Follow along what happens here. Here's the call sequence with indentation showing stack depth:
trampoline(f, 3, print)
f(3, print)
What does this call return? It effectively returns lambda: f(2, lambda x: lambda: print(min_distance(x))), so that's the new value of t.
That's not None, so we call t(), which calls:
f(2, lambda x: lambda: print(min_distance(x)))
What does that thing do? It immediately returns
lambda: f(1,
    lambda x:
        lambda:
            (lambda x: lambda: print(min_distance(x)))(add_one(x)))
So that's the new value of t. It's not None, so we invoke it. That calls:
f(1,
    lambda x:
        lambda:
            (lambda x: lambda: print(min_distance(x)))(add_one(x)))
Now we're in the base case, so we call the continuation, substituting 0 for x. It returns:
lambda: (lambda x: lambda : print(min_distance(x)))(add_one(0))
So that's the new value of t. It's not None, so we invoke it.
That calls add_one(0) and gets 1. It then passes 1 for x in the middle lambda. That thing returns:
lambda : print(min_distance(1))
So that's the new value of t. It's not None, so we invoke it. And that calls
print(min_distance(1))
Which prints out the correct answer, print returns None, and the loop stops.
Notice what happened there. The stack never got more than two deep because every call returned a function that said what to do next to the loop, rather than calling the function.
If this sounds familiar, it should. Basically what we're doing here is making a very simple work queue. Every time we "enqueue" a job, it is immediately dequeued, and the only thing the job does is enqueues the next job by returning a lambda to the trampoline, which sticks it in its "queue", the variable t.
We break the problem up into little pieces, and make each piece responsible for saying what the next piece is.
Now, you'll notice that we end up with arbitrarily deep nested lambdas, just as we ended up in the previous technique with an arbitrarily deep queue. Essentially what we've done here is moved the workflow description from an explicit list into a network of nested lambdas, but unlike before, this time we've done a little trick to avoid those lambdas ever calling each other in a manner that increases the stack depth.
Once you see this pattern of "break it up into pieces and describe a workflow that coordinates execution of the pieces", you start to see it everywhere. This is how Windows works; each window has a queue of messages, and messages can represent portions of a workflow. When a portion of a workflow wishes to say what the next portion is, it posts a message to the queue, and it runs later. This is how async await works -- again, we break up the workflow into pieces, and each await is the boundary of a piece. It's how generators work, where each yield is the boundary, and so on. Of course they don't actually use trampolines like this, but they could.
The key thing to understand here is the notion of continuation. Once you realize that you can treat continuations as objects that can be manipulated by the program, any control flow becomes possible. Want to implement your own try-catch? try-catch is just a workflow where every step has two continuations: the normal continuation and the exceptional continuation. When there's an exception, you branch to the exceptional continuation instead of the regular continuation. And so on.
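As a toy illustration of that last idea (again my own sketch, not part of the original answer), a division routine can take a normal continuation and an exceptional continuation, and branch between them instead of raising:

def divide(a, b, on_success, on_error):
    # each step gets two continuations: the normal one and the exceptional one
    if b == 0:
        on_error("division by zero")
        return
    on_success(a / b)

divide(10, 2, print, lambda msg: print("error:", msg))  # prints 5.0
divide(10, 0, print, lambda msg: print("error:", msg))  # prints error: division by zero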
The question here was again, how do we eliminate an out-of-stack caused by a deep recursion in general. I've shown that any recursive method of the form
def f(n):
    if is_base_case(n):
        return base_case_value
    argument = get_argument(n)
    after = get_after(n)
    return after(f(argument))

...

print(f(10))
can be rewritten as:
def f(n, c):
    if is_base_case(n):
        return c(base_case_value)
    argument = get_argument(n)
    after = get_after(n)
    return lambda: f(argument, lambda x: lambda: c(after(x)))

...

trampoline(f, 10, print)
and that the "recursive" method will now use only a very small, fixed amount of stack.
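For reference, here is one way the trampolined pieces might be assembled end to end. The helper definitions are repeated from the explicit-stack sketch in the first answer so this block runs on its own, and d(n) is again a made-up stand-in for the undefined distance function.

def d(n):
    return n  # placeholder stub; substitute the real distance function here

base_case_value = 0

def is_base_case(n):
    return n == 1

def get_argument(n):
    return n // 2 if n % 2 == 0 else n - 1

def add_one(n):
    return lambda x: 1 + x

def min_distance(n):
    def nested(x):
        return min(x + 1, d(n))
    return nested

def get_after(n):
    return add_one(n) if n % 2 == 0 else min_distance(n)

def f(n, c):
    if is_base_case(n):
        return c(base_case_value)
    argument = get_argument(n)
    after = get_after(n)
    return lambda: f(argument, lambda x: lambda: c(after(x)))

def trampoline(f, n, c):
    t = f(n, c)
    while t is not None:
        t = t()

trampoline(f, 100, print)  # stack depth stays small no matter how big n is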
First you need to find all the values of n. Luckily, your sequence is strictly descending and only depends on the next distance:
values = []
while n > 1:
    values.append(n)
    n = n // 2 if n % 2 == 0 else n - 1
Next you need to calculate the distance at each value. To do that we need to start from the bottom:
values.reverse()
And now we can easily keep track of the previous distance if we need it to calculate the next distance.
distance_so_far = 0
for v in values:
    if v % 2 == 0:
        distance_so_far += 1
    else:
        distance_so_far = min(distance(v), distance_so_far + 1)
return distance_so_far
Stick it all together:
def finaldistance(n):
    values = []
    while n > 1:
        values.append(n)
        n = n // 2 if n % 2 == 0 else n - 1
    values.reverse()
    distance_so_far = 0
    for v in values:
        if v % 2 == 0:
            distance_so_far += 1
        else:
            distance_so_far = min(distance(v), distance_so_far + 1)
    return distance_so_far
And now you're using memory instead of stack.
(I don't program in Python, so this is probably not idiomatic Python.)

Difference between Memoization Implementations - Python

What is the difference (if any exists) between these memoization implementations? Is there a use case where one is preferable to the other? (I included this Fibo recursion as an example)
Put another way: is there a difference between checking if some_value in self.memo: and if some_value not in self.memo:, and if so, is there a case where one presents a better implementation (better optimized for performance, etc.)?
class Fibo:
    def __init__(self):
        self.memo = {}

    """Implementation 1"""
    def fib1(self, n):
        if n in [0,1]:
            return n
        if n in self.memo:
            return self.memo[n]
        result = self.fib1(n - 1) + self.fib1(n - 2)
        self.memo[n] = result
        return result

    """Implementation 2"""
    def fib2(self, n):
        if n in [0,1]:
            return n
        if n not in self.memo:
            result = self.fib2(n - 1) + self.fib2(n - 2)
            self.memo[n] = result
        return self.memo[n]

# Fibo().fib1(8) returns 21
# Fibo().fib2(8) returns 21
There is no significant performance difference in these implementations. In my opinion fib2 is a more readable/pythonic implementation, and should be preferred.
One other recommendation I would make is to initialise the memo in __init__ like this:
self.memo = {0:0, 1:1}
This avoids the need for the base-case check inside each and every call; you can simply remove the first two lines of the fib method now.
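Putting both suggestions together, here is a sketch of what the class might look like with the memo seeded in __init__ (my illustration, not code from the question):

class Fibo:
    def __init__(self):
        # seed the memo with the base cases so no per-call check for n in [0,1] is needed
        self.memo = {0: 0, 1: 1}

    def fib(self, n):
        if n not in self.memo:
            self.memo[n] = self.fib(n - 1) + self.fib(n - 2)
        return self.memo[n]

# Fibo().fib(8) still returns 21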

Python: decorating simple recursive function

I wanted to practice recursion and decorators, so I tried to write this simple function, but it doesn't work:
def dec(func):
    def wrapper(number):
        print("Recursive count:")
        rec_cou(number)
    return wrapper

@dec
def rec_cou(number):
    """ Count from 0 to a given number from 50 and up """
    if number == 0:
        print(number)
        return number
    num = rec_cou(number - 1)
    print(num + 1)
    return num + 1

rec_cou(53)
The recursive function alone works well, but when I add the decorator it generates an error: maximum recursion depth exceeded.
There are two problems with your decorator:
You try to call the decorated function, effectively invoking the wrapper function again inside the decorator, thus you have an infinite recursive loop; call the original function func instead.
To the outside, the decorated function should behave just like the original function, particularly it should return its result; otherwise you will get type errors for trying to add numbers and None
Also, currently your decorator is not counting anything... try this:
def dec(func):
    func.count = 0  # give each decorated function its own counter
    def wrapper(number):
        print("Recursive count: %d" % func.count)
        func.count += 1  # increase counter
        return func(number)  # call original function 'func' and return result
    return wrapper
Update: From your comments, it seems I misunderstood what your decorator is supposed to do, and you misunderstood how decorators work. The decorator is not called once when you first call the function, but it replaces the function with the one defined within the decorator. In other words,
@dec
def foo(...):
    ...
is equivalent to
def foo(...):
    ...
foo = dec(foo)
I.e. the decorator is invoked exactly once, when the function is decorated, and the function constructed in the decorator is called each time the original function is called, replacing it. If you want to print only once, either use the decorator from the other answer, or rather use no decorator at all: just create a wrapper that prints and then calls the function. This is not unusual for providing an 'entry point' to recursive functions.
def print_and_run(number):
    print("Recursive count:")
    rec_cou(number)
BTW, this is the decorator that I usually use to visualize recursive calls:
def trace(f):
    trace.depth = 0
    def _f(*args, **kwargs):
        print(" " * trace.depth, ">", f.__name__, args, kwargs)
        trace.depth += 1
        res = f(*args, **kwargs)
        trace.depth -= 1
        print(" " * trace.depth, "<", res)
        return res
    return _f
To solve the maximum recursion depth problem, call the function passed into the decorator (func) rather than rec_cou and return the value of the function call. That is, on line 5, replace rec_cou(number) with return func(number).
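A minimal sketch of that fix, for reference (note that, written this way, the header still prints once per recursive call; the edit below addresses that):

def dec(func):
    def wrapper(number):
        print("Recursive count:")
        return func(number)  # call the original, undecorated function and return its result
    return wrapper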
Edit:
def decorate(function):
    def wrapper(parameter):
        if wrapper.initial:
            print("Recursive count:")
            wrapper.initial = False
        result = function(parameter)
        wrapper.initial = True
        return result
    wrapper.initial = True
    return wrapper

@decorate
def count(number):
    """ Prints integers on the interval [0, number] """
    if number:
        count(number - 1)
    print(number)

count(53)
Without decorator:
def count(number):
    """ Prints integers on the interval [0, number] """
    if number:
        count(number - 1)
    else:
        print("Recursive count:")
    print(number)

count(53)
If all you want is for the function rec_cou to print something before its recursive descent, just modify that function and don't bother with decorators.
def rec_cou(number, internal_call=False):
    """ Count from 0 to a given number from 50 and up """
    if not internal_call:
        print("Now performing recursive count, starting with %d" % number)
    if number == 0:
        return number
    num = rec_cou(number - 1, internal_call=True)
    return num + 1
As I mentioned in my comments, all I've done is take the idea behind Joel's answer (adding a variable, which I called a "flag", indicating whether the function is being called externally or as part of the recursion) and move the flag variable (which I've called internal_call, whereas Joel called it initial) inside the function itself.
Additionally, I'm not sure what all this num business is about. Note that:
For the 0 case, rec_cou returns 0.
For number > 0, num is set to the value returned by rec_cou(number-1), then 1+num is returned.
For example, in the case of rec_cou(1), num is set to rec_cou(0), which is 0, then 0 + 1 is returned, which is 1. Similarly, for rec_cou(2), one more than the value of rec_cou(1) is returned, so 2 is returned.
In short, for every natural number, rec_cou(number) returns the value of the input number. It's not clear what you're trying to achieve, but what you've got is an identity function, which seems unlikely to be what you want.

Global variables in recursion. Python

OK, I'm using Python 2.7.3, and here is my code:
def lenRecur(s):
    count = 0

    def isChar(c):
        c = c.lower()
        ans = ''
        for s in c:
            if s in 'abcdefghijklmnopqrstuvwxyz':
                ans += s
        return ans

    def leng(s):
        global count
        if len(s) == 0:
            return count
        else:
            count += 1
            return leng(s[1:])

    return leng(isChar(s))
I'm trying to modify the variable count inside the leng function. Here are the things that I've tried:
If I put the variable count outside the lenRecur function it works fine the first time, but if I try again without restarting python shell, the count (obviously) doesn't restart, so it keeps adding.
If I change the count += 1 line for count = 1 it also works, but the output is (obviously) one.
So, my goal here is to get the length of the string using recursion, but I don't know how to keep track of the number of letters. I've searched for information about global variables, but I am still stuck. I don't know if i haven't understood it yet, or if I have a problem in my code.
Thanks in advance!
count in lenRecur is not a global. It is a scoped variable.
You'll need to use Python 3 before you can make that work in this way; you are looking for the nonlocal statement added to Python 3.
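For reference, a rough sketch of the Python 3 version using nonlocal (keeping your leng structure; the isChar filtering from your code is omitted here for brevity):

def lenRecur(s):
    count = 0

    def leng(s):
        nonlocal count  # rebind the enclosing function's count (Python 3 only)
        if len(s) == 0:
            return count
        count += 1
        return leng(s[1:])

    return leng(s)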
In Python 2, you can work around this limitation by using a mutable (such as a list) for count instead:
def lenRecur(s):
    count = [0]

    # ...

    def leng(s):
        if len(s) == 0:
            return count[0]
        else:
            count[0] += 1
            return leng(s[1:])
Now you are no longer altering the count name itself; it remains unchanged, it keeps referring to the same list. All you are doing is altering the first element contained in the count list.
An alternative 'spelling' would be to make count a function attribute:
def lenRecur(s):
    # ...

    def leng(s):
        if len(s) == 0:
            return leng.count
        else:
            leng.count += 1
            return leng(s[1:])

    leng.count = 0
Now count is no longer local to lenRecur(); it has become an attribute on the nested leng() function instead.
For your specific problem, you are actually overthinking things. Just have the recursion do the summing:
def lenRecur(s):
    def characters_only(s):
        return ''.join([c for c in s if c.isalpha()])

    def len_recursive(s):
        if not s:
            return 0
        return 1 + len_recursive(s[1:])

    return len_recursive(characters_only(s))
Demo:
>>> def lenRecur(s):
...     def characters_only(s):
...         return ''.join([c for c in s if c.isalpha()])
...     def len_recursive(s):
...         if not s:
...             return 0
...         return 1 + len_recursive(s[1:])
...     return len_recursive(characters_only(s))
...
>>> lenRecur('The Quick Brown Fox')
16
I think you can pass count as a second argument:
def anything(s):
    def leng(s, count):
        if not s:
            return count
        return leng(s[1:], count + 1)

    return leng(isChar(s), 0)
This should work better than mutating objects from the outer scope, for example by using mutable objects (a list or dict) or by monkey-patching the function itself.
You need to make the variable count a function attribute, like:
def lenRecur(s):
    lenRecur.count = 0
However, I see a few problems with the code.
1) If you are trying to find the number of alphabetic characters in a string through recursion, this one will do:
def lenRecur(s):
    def leng(s, count=0):
        if not s:
            return count
        else:
            count += int(s[0].isalpha())
            return leng(s[1:], count)

    return leng(s)
But still, I would prefer having a single function do the task, so there would be no leng method at all.
2) If your goal is just to find the number of alphabetic characters in a string, I would prefer a comprehension:
def alphalen(s):
    return sum([1 for ch in s if ch.isalpha()])
If this is anything other than a learning exercise, I suggest you avoid recursion, because the solution cannot be used for larger strings (say, finding the alphabetic character count of a file's contents); you might hit a RuntimeError: maximum recursion depth exceeded.
Even though you can work around this by raising the recursion limit with sys.setrecursionlimit, I suggest you go for other, easier ways. (See the documentation of setrecursionlimit for more info.)
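For example, raising the limit looks like this; use it with care, since it only postpones the problem and an overly large limit can crash the interpreter:

import sys

sys.setrecursionlimit(10000)  # CPython's default limit is 1000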
Define it outside all function definitions, if you want to use it as a global variable:
count = 0
def lenRecur(s):
or define it as a function attribute:
def lenRecur(s):
    lenRecur.count = 0
    def isChar(c):
This has been fixed in py3.x where you can use the nonlocal statement:
def leng(s):
    nonlocal count
    if len(s) == 0:
You don't need count. The below function should work.
def leng(s):
    if not s:
        return 0
    return 1 + leng(s[1:])
Global variables in recursion are tricky: once the recursion reaches its deepest level and starts returning back toward the first call, the values of the local variables change at each level, which is why people reach for globals. The problem with a global variable is that when you run the function multiple times, it doesn't reset.

Python: Is math.factorial memoized?

I am solving a problem in three different ways; two are recursive, and I memoize them myself. The other is not recursive but uses math.factorial. I need to know whether I need to add explicit memoization to it.
Thanks.
Search for math_factorial on this link and you will find its implementation in python:
http://svn.python.org/view/python/trunk/Modules/mathmodule.c?view=markup
P.S. This is for python2.6
Python's math.factorial is not memoized; it is a simple loop multiplying the values from 1 up to your argument. If you need memoization, you need to do it explicitly.
Here is a simple way to memoize using dictionary setdefault method.
import math

cache = {}

def myfact(x):
    return cache.setdefault(x, math.factorial(x))

print myfact(10000)
print myfact(10000)
Python's math.factorial is not memoized.
I'm going to guide you through some trial-and-error examples to show why, to get a really memoized and working factorial function, you have to redefine it from scratch, taking a couple of things into account.
The other answer actually is not correct. Here,
import math

cache = {}

def myfact(x):
    return cache.setdefault(x, math.factorial(x))
the line
return cache.setdefault(x,math.factorial(x))
evaluates math.factorial(x) every time, even when x is already in the cache, so you gain no performance improvement.
You may think of doing something like this:
if x not in cache:
    cache[x] = math.factorial(x)
return cache[x]
but actually this is wrong as well. Yes, you avoid computing the factorial of the same x twice, but think, for example, about calculating myfact(1000) and soon after that myfact(999). Both of them get calculated completely, so you take no advantage of the fact that computing myfact(1000) effectively computes myfact(999) along the way.
It then comes naturally to write something like this:
def memoize(f):
    """Returns a memoized version of f"""
    memory = {}
    def memoized(*args):
        if args not in memory:
            memory[args] = f(*args)
        return memory[args]
    return memoized

@memoize
def my_fact(x):
    assert x >= 0
    if x == 0:
        return 1
    return x * my_fact(x - 1)
This is going to work. Unfortunately it soon reaches the maximum recursion depth.
So how to implement it?
Here is an example of truly memoized factorial, that takes advantage of how factorials work and does not consumes all the stack with recursive calls:
# The 'max' key stores the maximum number for which the factorial is stored.
fact_memory = {0: 1, 1: 1, 'max': 1}

def my_fact(num):
    # Factorial is defined only for non-negative numbers
    assert num >= 0
    if num <= fact_memory['max']:
        return fact_memory[num]
    for x in range(fact_memory['max'] + 1, num + 1):
        fact_memory[x] = fact_memory[x-1] * x
    fact_memory['max'] = num
    return fact_memory[num]
I hope you find this useful.
EDIT:
Just as a note, a way to achieve this same optimization while keeping the conciseness and elegance of recursion would be to redefine the function as a tail-recursive function.
def memoize(f):
    """Returns a memoized version of f"""
    memory = {}
    def memoized(*args):
        if args not in memory:
            memory[args] = f(*args)
        return memory[args]
    return memoized

@memoize
def my_fact(x, fac=1):
    assert x >= 0
    if x < 2:
        return fac
    return my_fact(x - 1, x * fac)
Tail-recursive functions can in fact be recognized by the interpreter/compiler and automatically translated/optimized into an iterative version, but not all interpreters/compilers support this.
Unfortunately python does not support tail recursion optimization, so you still get:
RuntimeError: maximum recursion depth exceeded
when the input of my_fact is high.
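Since Python will not do that optimization for you, one option (my own sketch, not part of the answer above) is to translate the tail-recursive version into a loop by hand:

import math

def my_fact_iter(x):
    # manual translation of the tail-recursive my_fact into a loop
    assert x >= 0
    fac = 1
    while x >= 2:
        fac = x * fac
        x = x - 1
    return fac

print(my_fact_iter(1000) == math.factorial(1000))  # True, with no recursion limit hit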
I'm late to the party, yet here are my 2c on implementing an efficient memoized factorial function in Python. This approach is more efficient since it relies on an array-like structure (that is, a list) rather than a hashed container (that is, a dict). No recursion is involved (sparing you some Python function-call overhead) and no slow for-loops are involved. And it is (arguably) functionally pure, as there are no outer side effects (that is, it doesn't modify a global variable). It caches all intermediate factorials, so if you've already calculated factorial(n), it takes O(1) to calculate factorial(m) for any 0 <= m <= n and O(m-n) for any m > n.
# Python 2 code: relies on the built-in reduce and xrange.
def inner_func(f):
    return f()

@inner_func
def factorial():
    factorials = [1]
    def calculate_factorial(n):
        assert n >= 0
        return reduce(lambda cache, num: (cache.append(cache[-1] * num) or cache),
                      xrange(len(factorials), n + 1), factorials)[n]
    return calculate_factorial
