Deferring a function call in an argument in Python 3

I'm trying to add a cache of sorts to an expensive function using a fixed dictionary. Something like this:
def func(arg):
    if arg in precomputed:
        return precomputed[arg]
    else:
        return expensive_function(arg)
Now it would be a bit cleaner if I could do something like this using dict.get() default values:
def func(arg):
    return precomputed.get(arg, expensive_function(arg))
The problem is, expensive_function() runs regardless of whether precomputed.get() succeeds, so we get all of the fat for none of the flavor.
Is there a way I can defer the call to expensive_function() here so it is only called if precomputed.get() fails?

If it's cleanliness you are looking for, I suggest using a library rather than reinventing the wheel:
from functools import lru_cache

@lru_cache
def expensive_function(arg):
    # do expensive thing
    pass
Now all calls to expensive_function are memoised, and you can call it without dealing with the cache yourself. (If you are worried about memory consumption, you can even limit the cache size.)
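For instance, a bounded cache is just a matter of passing maxsize (a minimal sketch; the body of expensive_function here is only a placeholder):
from functools import lru_cache

@lru_cache(maxsize=128)   # keep only the 128 most recently used results
def expensive_function(arg):
    return sum(i * i for i in range(arg))  # placeholder for the real expensive work

expensive_function(10_000)               # computed and cached
expensive_function(10_000)               # served from the cache
print(expensive_function.cache_info())   # hits=1, misses=1, maxsize=128, currsize=1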
To answer the literal question, Python has no way of creating functions or macros that lazily evaluate their parameters, like Haskell or Scheme do. The only way to defer calculation in Python is to wrap it in a function or a generator. It would be less, not more, readable than your original code.
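To illustrate what that wrapping looks like, here is a rough sketch with the fallback wrapped in a lambda so it only runs on a cache miss (get_or_compute and _MISSING are made-up helpers; precomputed and expensive_function are the question's names):
_MISSING = object()  # hypothetical sentinel, not part of the question's code

def get_or_compute(mapping, key, thunk):
    # Return mapping[key] if present; otherwise call thunk() and return its result.
    value = mapping.get(key, _MISSING)
    return thunk() if value is _MISSING else value

def func(arg):
    # The lambda wraps the expensive call, so it only runs on a cache miss.
    return get_or_compute(precomputed, arg, lambda: expensive_function(arg))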

def func(arg):
    result = precomputed[arg] if arg in precomputed else expensive_function(arg)
    return result

Related

Do different types of recursion have different memory complexities? [duplicate]

I have the following piece of code which fails with the following error:
RuntimeError: maximum recursion depth exceeded
I attempted to rewrite this to allow for tail recursion optimization (TCO). I believe this code would have succeeded had TCO taken place.
def trisum(n, csum):
    if n == 0:
        return csum
    else:
        return trisum(n - 1, csum + n)

print(trisum(1000, 0))
Should I conclude that Python does not do any type of TCO, or do I just need to define it differently?
No, and it never will, since Guido van Rossum prefers to be able to have proper tracebacks:
Tail Recursion Elimination (2009-04-22)
Final Words on Tail Calls (2009-04-27)
You can manually eliminate the recursion with a transformation like this:
>>> def trisum(n, csum):
...     while True:                     # Change recursion to a while loop
...         if n == 0:
...             return csum
...         n, csum = n - 1, csum + n   # Update parameters instead of tail recursion
...
>>> trisum(1000, 0)
500500
I published a module performing tail-call optimization (handling both tail-recursion and continuation-passing style): https://github.com/baruchel/tco
Optimizing tail-recursion in Python
It has often been claimed that tail recursion doesn't suit the Pythonic way of coding and that one shouldn't care about how to embed it in a loop. I don't want to argue with this point of view; sometimes, however, I like trying or implementing new ideas as tail-recursive functions rather than with loops, for various reasons (focusing on the idea rather than on the process, having twenty short functions on my screen at the same time rather than only three "Pythonic" functions, working in an interactive session rather than editing my code, etc.).
Optimizing tail recursion in Python is in fact quite easy. While it is said to be impossible or very tricky, I think it can be achieved with elegant, short and general solutions; I even think that most of these solutions don't use Python features in any way other than intended. Clean lambda expressions working along with very standard loops lead to quick, efficient and fully usable tools for implementing tail-recursion optimization.
As a personal convenience, I wrote a small module implementing such an optimization in two different ways. I would like to discuss my two main functions here.
The clean way: modifying the Y combinator
The Y combinator is well known; it allows lambda functions to be used recursively, but by itself it doesn't allow recursive calls to be embedded in a loop. Lambda calculus alone can't do such a thing. A slight change in the Y combinator, however, can protect the recursive call from actually being evaluated; evaluation can thus be delayed.
Here is the famous expression for the Y combinator:
lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))
With a very slight change, I could get:
lambda f: (lambda x: x(x))(lambda y: f(lambda *args: lambda: y(y)(*args)))
Instead of calling itself, the function f now returns a function performing the
very same call, but since it returns it, the evaluation can be done later from outside.
My code is:
def bet(func):
    b = (lambda f: (lambda x: x(x))(lambda y:
         f(lambda *args: lambda: y(y)(*args))))(func)
    def wrapper(*args):
        out = b(*args)
        while callable(out):
            out = out()
        return out
    return wrapper
The function can be used in the following way; here are two examples with tail-recursive
versions of factorial and Fibonacci:
>>> from recursion import *
>>> fac = bet( lambda f: lambda n, a: a if not n else f(n-1,a*n) )
>>> fac(5,1)
120
>>> fibo = bet( lambda f: lambda n,p,q: p if not n else f(n-1,q,p+q) )
>>> fibo(10,0,1)
55
Obviously recursion depth isn't an issue any longer:
>>> bet( lambda f: lambda n: 42 if not n else f(n-1) )(50000)
42
This is of course the real purpose of the function.
Only one thing can't be done with this optimization: it can't be used with a tail-recursive function that evaluates to another function (this comes from the fact that callable returned objects are all handled as further recursive calls, with no distinction). Since I usually don't need such a feature, I am very happy with the code above. However, in order to provide a more general module, I thought a little more about a workaround for this issue (see the next section).
Concerning the speed of this process (which isn't the real issue however), it happens
to be quite good; tail-recursive functions are even evaluated much quicker than with
the following code using simpler expressions:
def bet1(func):
    def wrapper(*args):
        out = func(lambda *x: lambda: x)(*args)
        while callable(out):
            out = func(lambda *x: lambda: x)(*out())
        return out
    return wrapper
I think that evaluating one expression, even a complicated one, is much quicker than evaluating several simple expressions, which is what happens in this second version. I didn't keep this new function in my module, and I see no circumstances where it could be used rather than the "official" one.
Continuation passing style with exceptions
Here is a more general function; it is able to handle all tail-recursive functions, including those returning other functions. Recursive calls are distinguished from other return values by the use of exceptions. This solution is slower than the previous one; quicker code could probably be written by using some special values as "flags" to be detected in the main loop, but I don't like the idea of using special values or internal keywords. There is a funny interpretation of using exceptions: if Python doesn't like tail-recursive calls, an exception should be raised when a tail-recursive call does occur, and the Pythonic way is to catch the exception in order to find some clean solution, which is actually what happens here...
class _RecursiveCall(Exception):
    def __init__(self, *args):
        self.args = args

def _recursiveCallback(*args):
    raise _RecursiveCall(*args)

def bet0(func):
    def wrapper(*args):
        while True:
            try:
                return func(_recursiveCallback)(*args)
            except _RecursiveCall as e:
                args = e.args
    return wrapper
Now all functions can be used. In the following example, f(n) is evaluated to the
identity function for any positive value of n:
>>> f = bet0( lambda f: lambda n: (lambda x: x) if not n else f(n-1) )
>>> f(5)(42)
42
Of course, it could be argued that exceptions are not intended to be used for intentionally redirecting the interpreter (as a kind of goto statement, or probably rather a kind of continuation-passing style), which I have to admit. But, again, I find the idea of using try with a single line being a return statement amusing: we try to return something (normal behaviour), but we can't do it because a recursive call occurs (the exception).
Initial answer (2013-08-29).
I wrote a very small plugin for handling tail recursion. You may find it with my explanations here: https://groups.google.com/forum/?hl=fr#!topic/comp.lang.python/dIsnJ2BoBKs
It can embed a lambda function written in a tail-recursive style in another function which will evaluate it as a loop.
The most interesting feature of this small function, in my humble opinion, is that it doesn't rely on some dirty programming hack but on mere lambda calculus: the behaviour of the function is changed into another one when it is inserted into another lambda function which looks very much like the Y combinator.
The word of Guido is at http://neopythonic.blogspot.co.uk/2009/04/tail-recursion-elimination.html
I recently posted an entry in my Python History blog on the origins of
Python's functional features. A side remark about not supporting tail
recursion elimination (TRE) immediately sparked several comments about
what a pity it is that Python doesn't do this, including links to
recent blog entries by others trying to "prove" that TRE can be added
to Python easily. So let me defend my position (which is that I don't
want TRE in the language). If you want a short answer, it's simply
unpythonic. Here's the long answer:
CPython does not and will probably never support tail call optimization based on Guido van Rossum's statements on the subject.
I've heard arguments that it makes debugging more difficult because of how it modifies the stack trace.
Try the experimental macropy TCO implementation for size.
Besides optimizing tail recursion, you can raise the recursion limit manually:
import sys
sys.setrecursionlimit(5500000)
print("recursion limit: %d" % sys.getrecursionlimit())
There is no built-in tail recursion optimization in Python. However, we can "rebuild" the function through its abstract syntax tree (AST), eliminating the recursion there and replacing it with a loop. Guido was absolutely right: this approach has some limitations, so I can't recommend it for general use.
However, I still wrote (rather as a training example) my own version of the optimizer, and you can even try how it works.
Install this package via pip:
pip install astrologic
Now you can run this sample code:
from astrologic import no_recursion

counter = 0

@no_recursion
def recursion():
    global counter
    counter += 1
    if counter != 10000000:
        return recursion()
    return counter

print(recursion())
This solution is not stable, and you should never use it in production. You can read about some significant restrictions on the project's GitHub page (in Russian, sorry). However, this solution is quite "real": it works without interrupting the code or resorting to other similar tricks.
A tail call can never be optimized to a jump in Python. An optimization is a program transformation that preserves the program's meaning. Tail-call elimination doesn't preserve the meaning of Python programs.
One problem, often mentioned, is that tail-call elimination changes the call stack, and Python allows for runtime introspection of the stack. But there is another problem that is rarely mentioned. There is probably a lot of code like this in the wild:
import mmap

def map_file(path):
    f = open(path, 'rb')
    return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
The call to mmap.mmap is in tail position. If it were replaced by a jump, then the current stack frame would be discarded before control was passed to mmap. The current stack frame contains the only reference to the file object, so the file object could (and in CPython would) be freed before mmap is called, which would close the file descriptor, invalidating it before mmap sees it.
At best, the code would fail with an exception. At worst, the file descriptor could be reused in another thread, causing mmap to map the wrong file. So this "optimization" would be a potentially disastrous thing to unleash on the huge body of existing Python code.
The Python spec guarantees that such problems won't occur, so you can be sure that no conformant implementation will ever convert return f(args) into a jump—unless, perhaps, it has a sophisticated static analysis engine that can prove that discarding an object early will have no observable consequences in this case.
None of that would prevent Python from adding a syntax for explicit tail calls with jump semantics, such as
return from f(args)
That wouldn't break code that didn't use it, and it would probably be useful for autogenerated code and some algorithms. GvR is no longer BDFL, so it might happen, but I wouldn't hold my breath.

Advantages of Higher order functions in Python

To implement prettified XML, I have written the following code:
def prettify_by_response(response, prettify_func):
    root = ET.fromstring(response.content)
    return prettify_func(root)

def prettify_by_str(xml_str, prettify_func):
    root = ET.fromstring(xml_str)
    return prettify_func(root)

def make_pretty_xml(root):
    rough_string = ET.tostring(root, "utf-8")
    reparsed = minidom.parseString(rough_string)
    xml = reparsed.toprettyxml(indent="\t")
    return xml

def prettify(response):
    if isinstance(response, str) or isinstance(response, bytes):
        return prettify_by_str(response, make_pretty_xml)
    else:
        return prettify_by_response(response, make_pretty_xml)
In the prettify_by_response and prettify_by_str functions, I pass the function make_pretty_xml as an argument.
Instead of passing the function as an argument, I could simply call it directly, e.g.:
def prettify_by_str(xml_str, prettify_func):
    root = ET.fromstring(xml_str)
    return make_pretty_xml(root)
One advantage of passing the function as an argument, rather than calling it directly, is that these functions are not tightly coupled to make_pretty_xml.
What would be the other advantages, or am I just adding additional complexity?
This seems very open to biased answers; I'll try to be impartial, but I can't make any promises.
First, higher-order functions (HoFs) are functions that receive and/or return functions. The advantages are debatable, so I'll enumerate the common uses of HoFs and point out the good and the bad of each one.
Callbacks
Callbacks came as a solution to blocking calls: I need B to happen after A, so I call something that blocks on A and then calls B. That naturally leads to the thought, "Hmm, my system wastes a lot of time waiting for things to happen; what if, instead of waiting, I pass in what needs to be done as an argument?" Like anything new in technology, it seems like a good idea until it has to scale.
Callbacks are very common in event systems. If you have ever written JavaScript, you know what I'm talking about.
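Here is a minimal sketch of the idea in Python (download and on_done are names made up purely for illustration; a real blocking operation would go where the fake one is):
def download(url, on_done):
    # Pretend this is a slow, blocking operation.
    data = "contents of " + url
    on_done(data)  # hand the result to whatever the caller wanted done next

def print_report(data):
    print("finished:", data)

download("https://example.com", print_report)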
Algorithm abstraction
Some designs, mostly behavioural ones, can use HoFs to choose an algorithm at runtime. You can have a high-level algorithm that receives functions dealing with the low-level details. This leads to more abstraction, code reuse and portable code; here, "portable" means you can write code for new low-level details without changing the high-level parts. This is not unique to HoFs, but they help a great deal.
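The built-in sorted is a familiar example of this shape: the high-level sorting algorithm is fixed, while the low-level notion of what to compare is passed in as a function (the key functions below are just illustrative):
records = [("alice", 34), ("bob", 27), ("carol", 41)]

# Same high-level algorithm, different low-level behaviour chosen at call time.
by_name = sorted(records, key=lambda r: r[0])
by_age = sorted(records, key=lambda r: r[1])

print(by_name)  # [('alice', 34), ('bob', 27), ('carol', 41)]
print(by_age)   # [('bob', 27), ('alice', 34), ('carol', 41)]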
Attaching behavior to another function
The idea here is to take a function as an argument and return a function that does exactly what the argument function does, plus some attached behaviour. And this is where (I think) HoFs really shine.
Python decorators are a perfect example. They take a function as an argument and return another function, which is bound to the same identifier as the original one:
@foo
def bar(*args):
    ...
is the same as
def bar(*args):
    ...
bar = foo(bar)
Now, reflect on this code
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
fib is just a Fibonacci function; it calculates the nth Fibonacci number. Now lru_cache attaches a new behaviour: caching results for previously calculated values. The logic inside fib is not tainted by the LRU-cache logic. What a beautiful piece of abstraction we have here.
Applicative style programming or point-free programming
The idea here is to remove variables, or "points", and to express algorithms by combining function applications. I'm sure there are lots of people wandering SO who know this subject better than I do.
As a side note, this is not a very common style in Python.
for i in it:
    func(i)

mapped_it = map(func, it)
In the second example, we removed the i variable. This is common in the parsing world. As another side note, map is lazy in Python, so the second example doesn't actually do anything until you iterate over mapped_it.
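A small illustration of that laziness (func and it are just placeholders here):
it = [1, 2, 3]
func = print

mapped_it = map(func, it)  # nothing is printed yet; map only builds a lazy iterator
list(mapped_it)            # iterating forces the calls: prints 1, 2 and 3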
Your case
In your case, you are just returning the value of the callback call. In fact, you don't need the callback; you can simply line up the calls as you did, and for this case you don't need a HoF.
I hope this helps, and that somebody can show better examples of applicative style :)
Regards

Using nested def for organizing code

I keep nesting def statements as a way of grouping/organizing code, but after doing some reading I think I'm abusing it.
Is it kosher to do something like this?
def generate_random_info():
    def random_name():
        return numpy.random.choice(['foo', 'bar'])
    def random_value():
        return numpy.random.rand()
    return {'name': random_name(), 'value': random_value()}
There is nothing wrong with it per se. But you should consider one thing when you use structures like this: random_name and random_value are functions that get redefined every time you call generate_random_info(). That might not be a problem for these particular functions, especially if you won't call it too often, but you should be aware that it is overhead that can be avoided.
So you should probably move those function definitions outside of the generate_random_info function. Or, since those inner functions don’t do much themselves, and you just call them directly, just inline them:
def generate_random_info():
    return {
        'name': numpy.random.choice(['foo', 'bar']),
        'value': numpy.random.rand()
    }
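The other option mentioned above, moving the helpers to module level so they are only defined once, would look roughly like this (a sketch reusing the question's numpy calls):
def random_name():
    return numpy.random.choice(['foo', 'bar'])

def random_value():
    return numpy.random.rand()

def generate_random_info():
    return {'name': random_name(), 'value': random_value()}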
Unless you are planning on reusing the same chunk of code repeatedly within a single function, and that function only, I would avoid creating those inner functions just for the sake of it. I'm not an expert on what happens at the computational level, but I would expect that defining a function is more expensive than simply using that line directly as you have it now, especially if you're only going to use the function once.
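If you want to check that intuition, a rough comparison along these lines shows the cost of re-creating the inner function on every call (it uses the standard library's random so the snippet is self-contained; exact numbers will vary by machine):
import timeit

setup = """
import random

def nested():
    def pick():
        return random.choice(['foo', 'bar'])
    return pick()

def flat():
    return random.choice(['foo', 'bar'])
"""

print(timeit.timeit("nested()", setup=setup, number=1_000_000))  # redefines pick() on each call
print(timeit.timeit("flat()", setup=setup, number=1_000_000))    # no nested definition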

Modify a function in python without calling it

Suppose I have an arbitrary function f in Python, that takes parameters.
def f(x): return 2*x
Now suppose I want a function that takes a function and returns the same function, but flipped along the y-axis (if it were graphed).
The obvious way to do it is
def reverse_fn(f): return lambda x, funct=f: funct(-x)
However, stacking function-modifying functions like this eventually breaks the maximum recursion depth, since the result is just a function that calls another function that calls more functions, all the way down.
What is the best way to make function-modifying-functions in Python, that can be used over and over again without taking excessive call stack or nesting functions?
One approach is editing the bytecode of the function. This is a very advanced technique, and is also very fragile. So, don't use this for production code!
That said, there is a module out there which implements precisely the kind of editing you want. It's called bytecodehacks, first released on April 1, 2000 (yes, it was an April Fools' joke, but a completely functional one). A slightly later edition (from 2005) works fine on my install of Python 2.7.6; grab it from CVS and run setup.py as usual. (Don't use the April2000 version; it won't work on newer Pythons).
bytecodehacks basically implements a number of utility routines that make it possible to edit the bytecode of a section of code (a function, module, or even just a single block within a function). You can use it to implement macros, for example. For the purposes of modifying a function, the inline tool is probably the most useful.
Here's how you would implement reverse_fn using bytecodehacks:
from bytecodehacks.inline import inline

def reverse_fn(f):
    def g(x):
        # Note that we use a global name here, not `f`.
        return _f(-x)
    return inline(g, _f=f)
That's all! inline takes care of the dirty business of "inlining" the function f into the body of g. In effect, if f(x) was return 2*x, then the return from reverse_fn(f) would be a function equivalent to return 2*(-x) (which would not have any function calls in it).
Now, one limitation of bytecodehacks is that the variable renaming (in extend_and_rename in inline.py) is somewhat stupid. So, if you apply reverse_fn 1000 times in a row, you will get a huge slowdown as the local variable names will begin to explode in size. I'm not sure how to fix this, but if you do, it will substantially improve the performance for functions that are repeatedly inlined.
The default recursion limit of 1000 can be increased with sys.setrecursionlimit(), but even 1000 is extraordinarily deep recursion, and that much wrapping comes at a steep performance penalty if your wrappers tend to be the kind of trivial alteration you show in your example.
What you could do, if you're trying to build up complex functions procedurally from simple primitives, is to compose the compound functions as Python source text and pass them through eval() to get callable functions. This approach has the significant advantage that a function built up from 1000 primitives won't incur the cost of 1000 function calls and returns when executed.
Note that eval() should be used with caution; don't eval() untrusted sources.
eval() will be fairly expensive per function created, and without knowing a little more about what you're trying to do, it's hard to advise. You could also simply write a program that generates a big .py file full of the compound functions you want.
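A minimal sketch of that source-text approach, under the assumption that the functions being composed are simple expressions in x (flip_source and make_function are made-up names, and the naive string replacement is only meant to illustrate the idea; only ever eval() strings you built yourself):
def make_function(expr):
    # Build a one-argument function from a source-text expression in x.
    return eval("lambda x: " + expr)

def flip_source(expr):
    # Flip along the y-axis at the source level: f(x) becomes f(-x).
    return "(" + expr.replace("x", "(-x)") + ")"

base = "2*x"
flipped = flip_source(flip_source(base))  # flipping twice recovers the original shape

f = make_function(flipped)
print(f(3))  # 6, with no chain of nested wrapper calls at run time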
I don't think you can achieve this in any language that doesn't support Tail Call Optimization without using a trampoline. Another option is to extract the AST of the function under question and generate a "brand new" function that doesn't call the original function at all but implementing this is not trivial and requires good understanding of some of the more internal parts of Python.
A trampoline, on the other hand, is easy to implement, but has the drawback that your functions cannot be simple Python functions anymore: every time they need to make a recursive call, they return that call as, say, a tuple of the form (some_fn, args, kwargs), while normal return values are wrapped in a 1-tuple. The trampoline then makes that call for you, so that the stack doesn't grow.
def rec(fn, *args, **kwargs):
    return (fn, args, kwargs)

def value(val):
    return (val,)

def tailrec(fn, *args, **kwargs):
    while True:
        ret = fn(*args, **kwargs)
        if ret is None:
            return None
        elif len(ret) == 1:
            return ret[0]
        else:
            fn, args, kwargs = ret

def greet_a_lot(n):
    if n > 0:
        print("hello: " + str(n))
        return rec(greet_a_lot, n - 1)
    else:
        return value("done")

print(tailrec(greet_a_lot, 10000))
Output:
hello: 10000
hello: 9999
...
hello: 3
hello: 2
hello: 1
done

Python method that is also a generator function?

I'm trying to build a method that also acts like a generator function, at the flip of a switch (want_gen below).
Something like:
def optimize(x, want_gen):
    # ... declaration and validation code
    for i in range(100):
        # estimate foo, bar, baz
        # ... some code here
        x = calculate_next_x(x, foo, bar, baz)
        if want_gen:
            yield x
    if not want_gen:
        return x
But of course this doesn't work: the mere presence of yield turns the whole function into a generator, so the plain return value is never delivered the way I want (and older versions of Python reject return with a value inside a generator outright), even though the two branches can never execute simultaneously.
The code is quite involved, and refactoring the declaration and validation code doesn't make much sense (too many state variables -- I will end up with difficult-to-name helper routines of 7+ parameters, which is decidedly ugly). And of course, I'd like to avoid code duplication as much as possible.
Is there some code pattern that would make sense here to achieve the behaviour I want?
Why do I need that?
I have a rather complicated and time-consuming optimization routine, and I'd like to get feedback about its current state during runtime (to display in e.g. GUI). The old behaviour needs to be there for backwards compatibility. Multithreading and messaging is too much work for too little additional benefit, especially when cross-platform operation is necessary.
Edit:
Perhaps I should have mentioned that since each optimization step is rather lengthy (there are some numerical simulations involved as well), I'd like to be able to "step in" at a certain iteration and twiddle some parameters, or abort the whole business altogether. The generators seemed like a good idea, since I could launch another iteration at my discretion, fiddling in the meantime with some parameters.
Since all you seem to want is some sort of feedback for a long-running function, why not just pass in a reference to a callback procedure that will be called at regular intervals?
Edit: why not just always yield? A function can yield a single value. If you don't want that, then have your function either return a generator or the plain value:
def stuff(x, want_gen):
    if want_gen:
        def my_gen(x):
            # ... code with yield
            yield x
        return my_gen(x)  # hand back a generator object
    else:
        return x
That way you are always returning a value. In Python, functions are objects.
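Usage would then look roughly like this (assuming the placeholder generator above yields the intermediate values):
result = stuff(5, want_gen=False)       # plain value back
for step in stuff(5, want_gen=True):    # generator back: iterate over intermediate values
    print(step)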
Well... we can always remember that yield was implemented in the language as a way to facilitate the existence of generator objects, but one can always implement the same behaviour either from scratch, or by getting the best of both worlds:
class Optimize(object):
    def __init__(self, x):
        self.x = x
    def __iter__(self):
        x = self.x
        # ... declaration and validation code
        for i in range(100):
            # estimate foo, bar, baz
            # ... some code here
            x = calculate_next_x(x, foo, bar, baz)
            yield x
    def __call__(self):
        # Exhaust the generator and return the final value.
        for x in self:
            pass
        return x

def optimize(x, wantgen):
    if wantgen:
        return iter(Optimize(x))
    else:
        return Optimize(x)()
Note that you don't even need the "optimize" function wrapper - I just put it there so it becomes a drop-in replacement for your example (were it to work).
The way the class is declared, you can do simply:
for y in Optimize(x):
    # code
to use it as a generator, or:
k = Optimize(x)()
to use it as a function.
Kind of messy, but I think this does the same as your original code was asking:
def optimize(x, want_gen):
    def optimize_gen(x):
        # ... declaration and validation code
        for i in range(100):
            # estimate foo, bar, baz
            # ... some code here
            x = calculate_next_x(x, foo, bar, baz)
            yield x
    if want_gen:
        return optimize_gen(x)
    # Not asked for a generator: exhaust it and return only the final value.
    for x in optimize_gen(x):
        pass
    return x
Alternatively the for loop at the end could be written:
return list(optimize_gen(x))[-1]
Now ask yourself if you really want to do this. Why do you sometimes want the whole sequence and sometimes only want the last element? Smells a bit fishy to me.
It's not completely clear what you want to happen if you switch between generator and function modes.
But as a first try: perhaps wrap the generator version in a new method which explicitly throws away the intermediate steps?
def gen():
    for i in range(100):
        yield i

def wrap():
    for x in gen():
        pass
    return x

print("wrap =", wrap())
With this version you could step into gen() by looping over smaller numbers of the range, make adjustments, and then use wrap() only when you want to finish up.
Simplest is to write two functions: one the generator, and the other calling the generator and just returning the final value. If you really want one function with both behaviours, you can always use the want_gen flag to decide what sort of return value to give: the iterator produced by the generator function when True, and just the value otherwise.
How about this pattern: make your three lines of changes to convert the function to a generator and rename it (to NewFunctionName, say). Then replace the existing function with one that either returns the generator when want_gen is True, or exhausts the generator and returns the final value.
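A minimal sketch of that last pattern (the halving step is a toy stand-in for the question's expensive calculate_next_x update):
def optimize_gen(x):
    # the original body, converted to a generator
    for i in range(100):
        x = x / 2                       # stand-in for the real per-iteration work
        yield x

def optimize(x, want_gen=False):
    if want_gen:
        return optimize_gen(x)          # caller can iterate and watch progress
    for x in optimize_gen(x):           # otherwise exhaust the generator...
        pass
    return x                            # ...and return only the final value

print(optimize(1000.0))                 # final value only
for step in optimize(1000.0, want_gen=True):
    pass                                # inspect each intermediate step here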
