I am solving a problem in three different ways; two are recursive, and I memoize them myself. The third is not recursive but uses math.factorial. I need to know whether I need to add explicit memoization to it.
Thanks.
Search for math_factorial at this link and you will find its implementation (in C):
http://svn.python.org/view/python/trunk/Modules/mathmodule.c?view=markup
P.S. This is for Python 2.6.
Python's math.factorial is not memoized; it is a simple for loop multiplying the values from 1 up to your argument. If you need memoization, you need to do it explicitly.
Here is a simple way to memoize using the dict setdefault method.
import math

cache = {}

def myfact(x):
    return cache.setdefault(x, math.factorial(x))

print myfact(10000)
print myfact(10000)
Python's math.factorial is not memoized.
I'm going to guide you through some trial-and-error examples to show why, to get a really memoized and working factorial function, you have to redefine it from scratch, taking a couple of things into account.
The other answer is actually not correct. Here,
import math

cache = {}

def myfact(x):
    return cache.setdefault(x, math.factorial(x))

the line
return cache.setdefault(x, math.factorial(x))
evaluates math.factorial(x) on every call, before setdefault ever consults the cache, and therefore you gain no performance improvement.
You may think of doing something like this:
if x not in cache:
    cache[x] = math.factorial(x)
return cache[x]
but actually this is wrong as well. Yes, you avoid computing the factorial of the same x twice, but think, for example, of calculating myfact(1000) and soon after that myfact(999). Both of them get computed in full, taking no advantage of the fact that computing myfact(1000) automatically computes myfact(999) along the way.
It comes natural then to write something like this:
def memoize(f):
    """Returns a memoized version of f"""
    memory = {}
    def memoized(*args):
        if args not in memory:
            memory[args] = f(*args)
        return memory[args]
    return memoized

@memoize
def my_fact(x):
    assert x >= 0
    if x == 0:
        return 1
    return x * my_fact(x - 1)
This is going to work. Unfortunately, it soon reaches the maximum recursion depth.
So how to implement it?
Here is an example of a truly memoized factorial that takes advantage of how factorials work and does not consume the whole stack with recursive calls:
# The 'max' key stores the largest number whose factorial is cached.
fact_memory = {0: 1, 1: 1, 'max': 1}

def my_fact(num):
    # Factorial is defined only for non-negative numbers
    assert num >= 0
    if num <= fact_memory['max']:
        return fact_memory[num]
    for x in range(fact_memory['max'] + 1, num + 1):
        fact_memory[x] = fact_memory[x - 1] * x
    fact_memory['max'] = num
    return fact_memory[num]
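For example (a quick sanity check, assuming the definitions above):
my_fact(1000)  # fills fact_memory up to 1000
my_fact(999)   # O(1): already cached as a by-product of the call above
my_fact(1001)  # only one extra multiplication past the cached maximum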
I hope you find this useful.
EDIT:
Just as a note, a way to achieve this same optimization while keeping the conciseness and elegance of recursion would be to redefine the function as a tail-recursive function.
def memoize(f):
    """Returns a memoized version of f"""
    memory = {}
    def memoized(*args):
        if args not in memory:
            memory[args] = f(*args)
        return memory[args]
    return memoized

@memoize
def my_fact(x, fac=1):
    assert x >= 0
    if x < 2:
        return fac
    return my_fact(x - 1, x * fac)
Tail-recursive functions can in fact be recognized by the interpreter/compiler and automatically translated/optimized into an iterative version, but not all interpreters/compilers support this.
Unfortunately, Python does not support tail-recursion optimization, so you still get:
RuntimeError: maximum recursion depth exceeded
when the input of my_fact is high.
I'm late to the party, yet here are my 2c on implementing an efficient memoized factorial function in Python. This approach is more efficient since it relies on an array-like structure (that is, a list) rather than a hashed container (that is, a dict). No recursion is involved (sparing you some Python function-call overhead) and no slow for loop is involved. And it is (arguably) functionally pure, as there are no outer side effects involved (that is, it doesn't modify a global variable). It caches all intermediate factorials, hence if you've already calculated factorial(n), it will take you O(1) to calculate factorial(m) for any 0 <= m <= n and O(m-n) for any m > n.
def inner_func(f):
    return f()

@inner_func
def factorial():
    factorials = [1]
    def calculate_factorial(n):
        assert n >= 0
        return reduce(lambda cache, num: (cache.append(cache[-1] * num) or cache),
                      xrange(len(factorials), n + 1), factorials)[n]
    return calculate_factorial
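For instance (a quick sketch of how the cached calls behave, assuming the definition above; Python 2, since it uses reduce and xrange):
factorial(1000)  # computes and caches factorials 0..1000
factorial(999)   # O(1): a plain list lookup now
factorial(1500)  # only computes the missing 1001..1500 tail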
I'm working on a problem in codewars that wants you to memoize the Fibonacci sequence. My solution so far has been:
def fibonacci(n):
    return fibonacci_helper(n, dict())

def fibonacci_helper(n, fib_nums):
    if n in [0, 1]:
        return fib_nums.setdefault(n, n)
    fib1 = fib_nums.setdefault(n - 1, fibonacci_helper(n - 1, fib_nums))
    fib2 = fib_nums.setdefault(n - 2, fibonacci_helper(n - 2, fib_nums))
    return fib_nums.setdefault(n, fib1 + fib2)
It works reasonably well for small values of n, but slows down significantly beyond the 30 mark, which made me wonder — is this solution even memoized? How would I get this type of solution working fast enough for large values of n?
Your function isn't memoized (at least not effectively) because you call fibonacci_helper regardless of whether you already have a memoized value. This is because setdefault doesn't do any magic that would prevent the arguments from being evaluated before they're passed into the function: you make the recursive call before the dict has checked whether it contains the value.
The point of memoization is to be careful to avoid doing the computation (in this case a lengthy recursive call) in cases where you already know the answer.
The way to fix this implementation would be something like:
def fibonacci(n):
    return fibonacci_helper(n, {0: 0, 1: 1})

def fibonacci_helper(n, fib_nums):
    if n not in fib_nums:
        fib1 = fibonacci_helper(n - 1, fib_nums)
        fib2 = fibonacci_helper(n - 2, fib_nums)
        fib_nums[n] = fib1 + fib2
    return fib_nums[n]
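With the cache consulted before recursing, each value is computed exactly once; for example:
fibonacci(300)  # returns almost instantly now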
If you're allowed to not reinvent the wheel, you could also just use functools.lru_cache, which adds memoization to any function through the magic of decorators:
from functools import lru_cache

@lru_cache
def fibonacci(n):
    if n in {0, 1}:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
You'll find that this is very fast for even very high values:
>>> fibonacci(300)
222232244629420445529739893461909967206666939096499764990979600
but if you define the exact same function without the @lru_cache decorator, it gets very slow because it's not benefiting from the cache.
>>> fibonacci(300)
(very very long wait)
You're close. The point of "a memo" is to save calls, but you're making recursive calls regardless of whether the result for an argument has already been memorized, so you're not actually saving the work of calling. The simplest fix is to define the cache outside the function, and simply return at once if the argument is in the cache:
fib_cache = {0: 0, 1: 1}

def fib(n):
    if n in fib_cache:
        return fib_cache[n]
    fib_cache[n] = result = fib(n - 1) + fib(n - 2)
    return result
Then the cache will persist across top-level calls too.
But now there's another problem ;-) If the argument is large enough (say, 30000), you're likely to get a RecursionError (too many levels of recursive calls). That's not due to using a cache, it's just inherent in very deep recursion.
You can worm around that too, by exploiting the cache to call smaller arguments first, working your way up to the actual argument. For example, insert this after the if block:
for i in range(100, n, 100):
    fib(i)
This ensures that recursion never has to go more than 100 levels deep to find an argument already memorized in the cache. I thought I'd mention that because hardly anyone ever does when answering a "memo" question. But memos are in fact a way not just to greatly speed some kinds of recursive algorithms, but also to apply them to some problems that "recurse too deep" without a memo constructed to limit the max call depth.
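Assembled from the fragments above, the depth-limited version would read (my sketch, not the answer's verbatim code):
fib_cache = {0: 0, 1: 1}

def fib(n):
    if n in fib_cache:
        return fib_cache[n]
    # Prime the cache bottom-up in steps of 100, so no single chain of
    # recursive calls ever needs to descend more than ~100 frames.
    for i in range(100, n, 100):
        fib(i)
    fib_cache[n] = result = fib(n - 1) + fib(n - 2)
    return result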
I just started Python and I've got no idea what memoization is and how to use it. Also, may I have a simplified example?
Memoization effectively refers to remembering ("memoization" → "memorandum" → to be remembered) results of method calls based on the method inputs, and then returning the remembered result rather than computing the result again. You can think of it as a cache for method results. For further details, see the definition on page 387 of Introduction to Algorithms (3rd edition), Cormen et al.
A simple example for computing factorials using memoization in Python would be something like this:
factorial_memo = {}

def factorial(k):
    if k < 2: return 1
    if k not in factorial_memo:
        factorial_memo[k] = k * factorial(k - 1)
    return factorial_memo[k]
You can get more complicated and encapsulate the memoization process into a class:
class Memoize:
    def __init__(self, f):
        self.f = f
        self.memo = {}
    def __call__(self, *args):
        if args not in self.memo:
            self.memo[args] = self.f(*args)
        # Warning: You may wish to do a deepcopy here if returning objects
        return self.memo[args]
Then:
def factorial(k):
    if k < 2: return 1
    return k * factorial(k - 1)

factorial = Memoize(factorial)
A feature known as "decorators" was added in Python 2.4, which allows you to simply write the following to accomplish the same thing:
@Memoize
def factorial(k):
    if k < 2: return 1
    return k * factorial(k - 1)
The Python Decorator Library has a similar decorator called memoized that is slightly more robust than the Memoize class shown here.
functools.cache decorator:
Python 3.9 added a new function, functools.cache. It caches in memory the result of a function called with a particular set of arguments, which is memoization. It's easy to use:
import functools
import time

@functools.cache
def calculate_double(num):
    time.sleep(1)  # sleep for 1 second to simulate a slow calculation
    return num * 2
The first time you call calculate_double(5), it will take a second and return 10. The second time you call the function with the same argument, calculate_double(5) will return 10 instantly.
Adding the cache decorator ensures that if the function has been called recently for a particular value, it will not recompute that value, but use a cached previous result. In this case, it leads to a tremendous speed improvement, while the code is not cluttered with the details of caching.
(Edit: the previous example calculated a fibonacci number using recursion, but I changed the example to prevent confusion, hence the old comments.)
functools.lru_cache decorator:
If you need to support older versions of Python, functools.lru_cache works in Python 3.2+. By default, it only caches the 128 most recently used calls, but you can set the maxsize to None to indicate that the cache should never expire:
@functools.lru_cache(maxsize=None)
def calculate_double(num):
    # etc
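Spelled out with the same toy function as above (a sketch; the body is unchanged from the functools.cache example):
import functools
import time

@functools.lru_cache(maxsize=None)
def calculate_double(num):
    time.sleep(1)  # sleep for 1 second to simulate a slow calculation
    return num * 2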
The other answers cover what it is quite well. I'm not repeating that. Just some points that might be useful to you.
Usually, memoisation is an operation you can apply to any function that computes something (expensive) and returns a value. Because of this, it's often implemented as a decorator. The implementation is straightforward, and it would be something like this:
memoised_function = memoise(actual_function)
or expressed as a decorator:
@memoise
def actual_function(arg1, arg2):
    # body
I've found this extremely useful
from functools import wraps

def memoize(function):
    memo = {}
    @wraps(function)
    def wrapper(*args):
        # add the new key to dict if it doesn't exist already
        if args not in memo:
            memo[args] = function(*args)
        return memo[args]
    return wrapper

@memoize
def fibonacci(n):
    if n < 2: return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(25)
Memoization is keeping the results of expensive calculations and returning the cached result rather than continuously recalculating it.
Here's an example:
def doSomeExpensiveCalculation(self, input):
    if input not in self.cache:
        result = <do expensive calculation>
        self.cache[input] = result
    return self.cache[input]
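For instance, wrapped in a minimal class (the class name and the stand-in computation are made up for illustration):
class Calculator:
    def __init__(self):
        self.cache = {}

    def doSomeExpensiveCalculation(self, value):
        if value not in self.cache:
            result = value ** 2  # stand-in for the expensive work
            self.cache[value] = result
        return self.cache[value]

calc = Calculator()
calc.doSomeExpensiveCalculation(12)  # computed and stored
calc.doSomeExpensiveCalculation(12)  # served from self.cache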
A more complete description can be found in the wikipedia entry on memoization.
Let's not forget the built-in hasattr function, for those who want to hand-craft their own. That way you can keep the memo cache inside the function definition (as opposed to a global):
def fact(n):
    if not hasattr(fact, 'mem'):
        fact.mem = {0: 1, 1: 1}  # base cases (0 included so fact(0) terminates)
    if n not in fact.mem:
        fact.mem[n] = n * fact(n - 1)
    return fact.mem[n]
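Since the cache lives on the function object itself, you can also inspect it directly:
fact(5)    # 120
fact.mem   # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24, 5: 120}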
Memoization is basically saving the results of past operations done with recursive algorithms in order to reduce the need to traverse the recursion tree if the same calculation is required at a later stage.
see http://scriptbucket.wordpress.com/2012/12/11/introduction-to-memoization/
Fibonacci Memoization example in Python:
fibcache = {}

def fib(num):
    if num in fibcache:
        return fibcache[num]
    else:
        fibcache[num] = num if num < 2 else fib(num - 1) + fib(num - 2)
        return fibcache[num]
Memoization is the conversion of functions into data structures. Usually one wants the conversion to occur incrementally and lazily (on demand of a given domain element--or "key"). In lazy functional languages, this lazy conversion can happen automatically, and thus memoization can be implemented without (explicit) side-effects.
Well, I should answer the first part first: what's memoization?
It's just a method to trade memory for time. Think of a multiplication table.
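As a toy illustration of that trade (my own example, not from the original answer): pay the memory cost up front, then every later "computation" is a lookup.
# Build a 9x9 multiplication table once (memory cost)...
table = {(a, b): a * b for a in range(1, 10) for b in range(1, 10)}

# ...so later products are O(1) lookups (time saved).
table[(7, 8)]  # 56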
Using a mutable object as a default value in Python is usually considered bad. But if you use it wisely, it can actually be useful for implementing memoization.
Here's an example adapted from http://docs.python.org/2/faq/design.html#why-are-default-values-shared-between-objects
Using a mutable dict in the function definition, the intermediate computed results can be cached (e.g., when calculating factorial(10) after calculating factorial(9), we can reuse all the intermediate results):
def factorial(n, _cache={1: 1}):
    try:
        return _cache[n]
    except KeyError:  # dict lookups raise KeyError, not IndexError
        _cache[n] = factorial(n - 1) * n
        return _cache[n]
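For example, assuming the fixed version above:
factorial(10)  # computes and caches 2! through 10! on the way up
factorial(9)   # now a single dict lookup, no recursion at all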
Here is a solution that will work with list or dict type arguments without whining:
def memoize(fn):
    """returns a memoized version of any function that can be called
    with the same list of arguments.
    Usage: foo = memoize(foo)"""

    def handle_item(x):
        if isinstance(x, dict):
            return make_tuple(sorted(x.items()))
        elif hasattr(x, '__iter__'):
            return make_tuple(x)
        else:
            return x

    def make_tuple(L):
        return tuple(handle_item(x) for x in L)

    def foo(*args, **kwargs):
        items_cache = make_tuple(sorted(kwargs.items()))
        args_cache = make_tuple(args)
        if (args_cache, items_cache) not in foo.past_calls:
            foo.past_calls[(args_cache, items_cache)] = fn(*args, **kwargs)
        return foo.past_calls[(args_cache, items_cache)]

    foo.past_calls = {}
    foo.__name__ = 'memoized_' + fn.__name__
    return foo
Note that this approach can be naturally extended to any object by implementing your own hash function as a special case in handle_item. For example, to make this approach work for a function that takes a set as an input argument, you could add to handle_item:
if isinstance(x, set):
    return make_tuple(sorted(list(x)))
Solution that works with both positional and keyword arguments, independently of the order in which the keyword args were passed (using inspect.getargspec):
import inspect
import functools

def memoize(fn):
    cache = fn.cache = {}
    @functools.wraps(fn)
    def memoizer(*args, **kwargs):
        kwargs.update(dict(zip(inspect.getargspec(fn).args, args)))
        key = tuple(kwargs.get(k, None) for k in inspect.getargspec(fn).args)
        if key not in cache:
            cache[key] = fn(**kwargs)
        return cache[key]
    return memoizer
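For example (a tiny sketch with a made-up function, showing the keyword-order independence):
@memoize
def add(a, b):
    return a + b

add(1, 2)      # computed; cached under the normalized key (1, 2)
add(b=2, a=1)  # normalized to the same key (1, 2): cache hit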
Similar question: Identifying equivalent varargs function calls for memoization in Python
Just wanted to add to the answers already provided, the Python decorator library has some simple yet useful implementations that can also memoize "unhashable types", unlike functools.lru_cache.
cache = {}

def fib(n):
    if n <= 1:
        return n
    else:
        if n not in cache:
            cache[n] = fib(n - 1) + fib(n - 2)
        return cache[n]
If speed is a consideration:
@functools.cache and @functools.lru_cache(maxsize=None) are equally fast, taking 0.122 seconds (best of 15 runs) to loop a million times on my system
a global cache variable is quite a lot slower, taking 0.180 seconds (best of 15 runs) to loop a million times on my system
a self.cache class variable is a bit slower still, taking 0.214 seconds (best of 15 runs) to loop a million times on my system
The latter two are implemented similarly to how it is described in the currently top-voted answer.
This is without memory-exhaustion prevention, i.e. I did not add code in the class or global methods to limit the cache's size; this is really the barebones implementation. The lru_cache method has that for free, if you need it.
One open question for me would be how to unit test something that has a functools decorator. Is it possible to empty the cache somehow? Unit tests seem like they would be cleanest using the class method (where you can instantiate a new class for each test) or, secondarily, the global variable method (since you can do yourimportedmodule.cachevariable = {} to empty it).
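For what it's worth, the functools decorators do provide a hook for this: the wrapped function gains cache_clear() and cache_info() attributes, so a test can empty the cache between cases. For example:
import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(300)
fib.cache_info()   # CacheInfo(hits=..., misses=..., maxsize=None, currsize=...)
fib.cache_clear()  # empties the cache, e.g. in a unit test's setUp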
While I understand that tail-recursion optimization is non-Pythonic, I came up with a quick hack to a question on here that was deleted as soon as I was ready to post.
With a 1000-frame stack limit, deep recursion algorithms are not usable in Python. But sometimes recursion is great for the initial thinking through a solution. Since functions are first-class in Python, I played with returning a valid function and the next value, then calling the process in a loop until done with single calls. I'm sure this isn't new.
What I found interesting is that I expected the extra overhead of passing the function back and forth to make this slower than normal recursion. During my crude testing, I found it to take 30-50% of the time of normal recursion (with an added bonus of allowing LONG recursions).
Here is the code I'm running:
from contextlib import contextmanager
import time

# Timing code from StackOverflow most likely.
@contextmanager
def time_block(label):
    start = time.clock()
    try:
        yield
    finally:
        end = time.clock()
        print('{} : {}'.format(label, end - start))

# Purely Recursive Function
def find_zero(num):
    if num == 0:
        return num
    return find_zero(num - 1)

# Function that returns tuple of [method], [call value]
def find_zero_tail(num):
    if num == 0:
        return None, num
    return find_zero_tail, num - 1

# Iterative recurser
def tail_optimize(method, val):
    while method:
        method, val = method(val)
    return val

with time_block('Pure recursion: 998'):
    find_zero(998)

with time_block('Tail Optimize Hack: 998'):
    tail_optimize(find_zero_tail, 998)

with time_block('Tail Optimize Hack: 1000000'):
    tail_optimize(find_zero_tail, 1000000)

# One Run Result:
# Pure recursion: 998 : 0.000372791020758
# Tail Optimize Hack: 998 : 0.000163852100569
# Tail Optimize Hack: 1000000 : 1.51006975627
Why is the second style faster?
My guess is the overhead of creating entries on the stack, but I'm not sure how to find out.
Edit:
In playing with call counts, I made a loop to try both at various num values. Recursive was much closer to parity when I was looping and calling multiple times.
So, I added this before the timing code; it is find_zero under a new name:
def unrelated_recursion(num):
    if num == 0:
        return num
    return unrelated_recursion(num - 1)

unrelated_recursion(998)
Now the tail-optimized call takes 85% of the time of the full recursion.
So my theory is that the 15% penalty is the overhead of the larger stack, versus a single stack frame.
The reason I saw such a huge disparity in execution time when only running each once was the penalty for allocating the stack memory and structure. Once that is allocated, the cost of using it is drastically lowered.
Because my algorithm is dead simple, the memory structure allocation is a large portion of the execution time.
When I cut my stack-priming call to unrelated_recursion(499), I get about halfway between a fully primed and an unprimed stack in find_zero(998) execution time, which is consistent with the theory.
As a comment helpfully reminded me, I was not really answering the question, so here is my sentiment:
In your optimization, you're allocating, unpacking and deallocating tuples, so I tried without them:
# Function that returns the next call value, or None when done
def find_zero_tail(num):
    if num == 0:
        return None
    return num - 1

# Iterative recurser
def tail_optimize(method, val):
    while val:
        val = method(val)
    return val
For 1000 tries, each starting with value = 998:
this version took 0.16s
your "optimized" version took 0.22s
the "unoptimized" one took 0.29s
(Note that for me, your optimized version is faster than the unoptimized one ... but we don't do the exact same test.)
But I don't think it is useful to gather those stats: the cost is more on the side of Python (method calls, tuple allocations, ...) than in your code doing real things. In a real application you won't end up measuring the cost of 1000 tuples, but the cost of your actual implementation.
But simply don't do this: it is just hard to read for almost nothing; you're writing for the reader, not for the machine:
# Function that returns tuple of [method], [call value]
def find_zero_tail(num):
    if num == 0:
        return None, num
    return find_zero_tail, num - 1

# Iterative recurser
def tail_optimize(method, val):
    while method:
        method, val = method(val)
    return val
I won't try to implement a more readable version of it because I'll probably end up with:
def find_zero(val):
    return 0
But I think in real cases there are some nice ways to deal with recursion limits (on both the memory-size and the depth side):
To help with memory (not depth), an lru_cache from functools may typically help a lot:
>>> from functools import lru_cache
>>> @lru_cache()
... def fib(x):
...     return fib(x - 1) + fib(x - 2) if x > 2 else 1
...
>>> fib(100)
354224848179261915075
And for stack size, you may use a list or a deque, depending on your context and usage, instead of using the language's call stack. Depending on the exact implementation (when you're in fact storing simple sub-computations in your stack to reuse them), it's called dynamic programming:
>>> def fib(x):
...     stack = [1, 1]
...     while len(stack) < x:
...         stack.append(stack[-1] + stack[-2])
...     return stack[-1]
...
>>> fib(100)
354224848179261915075
But, and here comes the nice part of using your own structure instead of the call stack, you don't always need to keep the whole stack around to continue the computation:
>>> def fib(x):
...     stack = (1, 1)
...     for _ in range(x - 2):
...         stack = stack[1], stack[0] + stack[1]
...     return stack[1]
...
>>> fib(100)
354224848179261915075
But to conclude with a nice touch of "know the problem before trying to implement it" (unreadable, hard to debug, hard to visually prove; it's bad code, but it's fun):
>>> def fib(n):
...     return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)
...
>>>
>>> fib(99)
354224848179261915075
If you ask me, the best implementation is the most readable one (for the Fibonacci example, probably the one with an LRU cache, but with the ... if ... else ... changed to a more readable if statement; for another example, a deque may be more readable; and for other examples, dynamic programming may be better...).
"You're writing for the human reading your code, not for the machine."