Suppose I have an arbitrary function f in Python, that takes parameters.
def f(x): return 2*x
Now suppose I want a function that takes a function and returns the same function, but flipped along the y-axis (if it were graphed).
The obvious way to do it is
def reverse_fn(f): return lambda x, funct=f: funct(-x)
However, stacking function-modifying functions like this eventually breaks the max recursion depth, since the result is just a function that calls another function that calls more functions all the way down.
What is the best way to make function-modifying-functions in Python, that can be used over and over again without taking excessive call stack or nesting functions?
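For example, something like this (the counts are illustrative) reproduces the failure with the default limit of 1000:

g = f
for _ in range(2000):   # each wrapper adds one stack frame per call
    g = reverse_fn(g)

g(1)   # RecursionError: maximum recursion depth exceeded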
One approach is editing the bytecode of the function. This is a very advanced technique, and is also very fragile. So, don't use this for production code!
That said, there is a module out there which implements precisely the kind of editing you want. It's called bytecodehacks, first released on April 1, 2000 (yes, it was an April Fools' joke, but a completely functional one). A slightly later edition (from 2005) works fine on my install of Python 2.7.6; grab it from CVS and run setup.py as usual. (Don't use the April2000 version; it won't work on newer Pythons).
bytecodehacks basically implements a number of utility routines that make it possible to edit the bytecode of a section of code (a function, module, or even just a single block within a function). You can use it to implement macros, for example. For the purposes of modifying a function, the inline tool is probably the most useful.
Here's how you would implement reverse_fn using bytecodehacks:
from bytecodehacks.inline import inline

def reverse_fn(f):
    def g(x):
        # Note that we use a global name here, not `f`.
        return _f(-x)
    return inline(g, _f=f)
That's all! inline takes care of the dirty business of "inlining" the function f into the body of g. In effect, if f(x) was return 2*x, then the return from reverse_fn(f) would be a function equivalent to return 2*(-x) (which would not have any function calls in it).
Now, one limitation of bytecodehacks is that the variable renaming (in extend_and_rename in inline.py) is somewhat stupid. So, if you apply reverse_fn 1000 times in a row, you will get a huge slowdown as the local variable names will begin to explode in size. I'm not sure how to fix this, but if you do, it will substantially improve the performance for functions that are repeatedly inlined.
The default recursion limit of 1000 can be increased with sys.setrecursionlimit(), but even 1000 is extraordinarily deep recursion, and comes at a steep performance penalty if your wrappers tend to be this kind of trivial alteration you show in your example.
What you could do, if you're trying to build up complex functions procedurally from simple primitives, is to compose the compound functions as Python source text and pass them through eval() to get callable functions. This approach has the significant advantage that a function built up from 1000 primitives won't incur the cost of 1000 function calls and returns when executed.
Note that eval() should be used with caution; don't eval() untrusted sources.
eval() will be fairly expensive per function created, and without knowing a little more about what you're trying to do, it's hard to advise. You could also simply write a program that generates a big .py file full of the compound functions you want.
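As a minimal sketch of the source-composition idea (the templates and names here are purely illustrative):

primitives = ["2*({})", "({})+1", "3*({})"]   # illustrative source templates
expr = "x"
for template in primitives * 20:              # 60 compositions, done as text
    expr = template.format(expr)

compound = eval("lambda x: " + expr)          # one flat function
print(compound(1))                            # no 60-deep call chain at runtime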
I don't think you can achieve this in any language that doesn't support tail call optimization without using a trampoline. Another option is to extract the AST of the function in question and generate a "brand new" function that doesn't call the original function at all, but implementing this is not trivial and requires a good understanding of some of the more internal parts of Python.
A trampoline, on the other hand, is easy to implement, but has the drawback that your functions can no longer be plain Python functions: every time they need to make a recursive call, they return that call as, say, a tuple of the form (some_fn, args, kwargs), while normal return values are wrapped in a 1-tuple; the trampoline then makes the call for you, so the stack doesn't grow.
def rec(fn, *args, **kwargs):
    return (fn, args, kwargs)

def value(val):
    return (val,)

def tailrec(fn, *args, **kwargs):
    while True:
        ret = fn(*args, **kwargs)
        if ret is None:
            return None
        elif len(ret) == 1:
            return ret[0]
        else:
            fn, args, kwargs = ret

def greet_a_lot(n):
    if n > 0:
        print("hello: " + str(n))
        return rec(greet_a_lot, n - 1)
    else:
        return value("done")

print(tailrec(greet_a_lot, 10000))
Output:
hello: 10000
hello: 9999
...
hello: 3
hello: 2
hello: 1
done
Related
I have the following piece of code which fails with the following error:
RuntimeError: maximum recursion depth exceeded
I attempted to rewrite it to allow for tail call optimization (TCO). I believe this code should have succeeded had TCO taken place.
def trisum(n, csum):
    if n == 0:
        return csum
    else:
        return trisum(n - 1, csum + n)

print(trisum(1000, 0))
Should I conclude that Python does not do any type of TCO, or do I just need to define it differently?
No, and it never will, since Guido van Rossum prefers to be able to have proper tracebacks:
Tail Recursion Elimination (2009-04-22)
Final Words on Tail Calls (2009-04-27)
You can manually eliminate the recursion with a transformation like this:
>>> def trisum(n, csum):
...     while True:                     # Change recursion to a while loop
...         if n == 0:
...             return csum
...         n, csum = n - 1, csum + n   # Update parameters instead of tail recursion
>>> trisum(1000,0)
500500
I published a module performing tail-call optimization (handling both tail-recursion and continuation-passing style): https://github.com/baruchel/tco
Optimizing tail-recursion in Python
It has often been claimed that tail recursion doesn't suit the Pythonic way of coding and that one shouldn't care about how to embed it in a loop. I don't want to argue with this point of view; sometimes, however, I like trying or implementing new ideas as tail-recursive functions rather than as loops, for various reasons (focusing on the idea rather than on the process, having twenty short functions on my screen at the same time rather than only three "Pythonic" functions, working in an interactive session rather than editing my code, etc.).
Optimizing tail recursion in Python is in fact quite easy. While it is said to be impossible or very tricky, I think it can be achieved with elegant, short and general solutions; I even think that most of these solutions don't use Python features in ways other than intended. Clean lambda expressions working along with very standard loops lead to quick, efficient and fully usable tools for implementing tail-recursion optimization.
As a personal convenience, I wrote a small module implementing such an optimization in two different ways. I would like to discuss my two main functions here.
The clean way: modifying the Y combinator
The Y combinator is well known; it allows one to use lambda functions in a recursive manner, but it doesn't by itself allow recursive calls to be embedded in a loop. Lambda calculus alone can't do such a thing. A slight change in the Y combinator, however, can protect the recursive call from actually being evaluated. Evaluation can thus be delayed.
Here is the famous expression for the Y combinator:
lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))
With a very slight change, I could get:
lambda f: (lambda x: x(x))(lambda y: f(lambda *args: lambda: y(y)(*args)))
Instead of calling itself, the function f now returns a function performing the
very same call, but since it returns it, the evaluation can be done later from outside.
My code is:
def bet(func):
    b = (lambda f: (lambda x: x(x))(lambda y:
          f(lambda *args: lambda: y(y)(*args))))(func)
    def wrapper(*args):
        out = b(*args)
        while callable(out):
            out = out()
        return out
    return wrapper
The function can be used in the following way; here are two examples with tail-recursive
versions of factorial and Fibonacci:
>>> from recursion import *
>>> fac = bet( lambda f: lambda n, a: a if not n else f(n-1,a*n) )
>>> fac(5,1)
120
>>> fibo = bet( lambda f: lambda n,p,q: p if not n else f(n-1,q,p+q) )
>>> fibo(10,0,1)
55
Obviously recursion depth isn't an issue any longer:
>>> bet( lambda f: lambda n: 42 if not n else f(n-1) )(50000)
42
This is of course the single real purpose of the function.
Only one thing can't be done with this optimization: it can't be used with a tail-recursive function that evaluates to another function (this comes from the fact that all callable returned objects are handled as further recursive calls, with no distinction). Since I usually don't need such a feature, I am very happy with the code above. However, in order to provide a more general module, I thought a little more in order to find some workaround for this issue (see next section).
Concerning the speed of this process (which isn't the real issue, however), it happens to be quite good; tail-recursive functions are evaluated much more quickly than with the following code using simpler expressions:
def bet1(func):
    def wrapper(*args):
        out = func(lambda *x: lambda: x)(*args)
        while callable(out):
            out = func(lambda *x: lambda: x)(*out())
        return out
    return wrapper
I think that evaluating one expression, even complicated, is much quicker than
evaluating several simple expressions, which is the case in this second version.
I didn't keep this new function in my module, and I see no circumstances where it
could be used rather than the "official" one.
Continuation passing style with exceptions
Here is a more general function; it is able to handle all tail-recursive functions, including those returning other functions. Recursive calls are distinguished from other return values by the use of exceptions. This solution is slower than the previous one; quicker code could probably be written by using some special values as "flags" to be detected in the main loop, but I don't like the idea of using special values or internal keywords. There is a funny interpretation of using exceptions: if Python doesn't like tail-recursive calls, an exception should be raised when a tail-recursive call does occur, and the Pythonic way is to catch the exception in order to find some clean solution, which is actually what happens here...
class _RecursiveCall(Exception):
    def __init__(self, *args):
        self.args = args

def _recursiveCallback(*args):
    raise _RecursiveCall(*args)

def bet0(func):
    def wrapper(*args):
        while True:
            try:
                return func(_recursiveCallback)(*args)
            except _RecursiveCall as e:
                args = e.args
    return wrapper
Now all functions can be used. In the following example, f(n) is evaluated to the
identity function for any positive value of n:
>>> f = bet0( lambda f: lambda n: (lambda x: x) if not n else f(n-1) )
>>> f(5)(42)
42
Of course, it could be argued that exceptions are not intended to be used for intentionally redirecting the interpreter (as a kind of goto statement, or probably rather a kind of continuation-passing style), which I have to admit. But, again, I find it funny to use try with a single line being a return statement: we try to return something (normal behaviour), but we can't do it because of a recursive call occurring (exception).
Initial answer (2013-08-29).
I wrote a very small plugin for handling tail recursion. You may find it with my explanations there: https://groups.google.com/forum/?hl=fr#!topic/comp.lang.python/dIsnJ2BoBKs
It can embed a lambda function written with a tail recursion style in another function which will evaluate it as a loop.
The most interesting feature of this small function, in my humble opinion, is that it doesn't rely on some dirty programming hack but on mere lambda calculus: the behaviour of the function is changed into another one when it is inserted into another lambda function, which looks very much like the Y combinator.
The word of Guido is at http://neopythonic.blogspot.co.uk/2009/04/tail-recursion-elimination.html
I recently posted an entry in my Python History blog on the origins of
Python's functional features. A side remark about not supporting tail
recursion elimination (TRE) immediately sparked several comments about
what a pity it is that Python doesn't do this, including links to
recent blog entries by others trying to "prove" that TRE can be added
to Python easily. So let me defend my position (which is that I don't
want TRE in the language). If you want a short answer, it's simply
unpythonic. Here's the long answer:
CPython does not and will probably never support tail call optimization based on Guido van Rossum's statements on the subject.
I've heard arguments that it makes debugging more difficult because of how it modifies the stack trace.
Try the experimental macropy TCO implementation for size.
Besides optimizing tail recursion, you can set the recursion limit manually:
import sys
sys.setrecursionlimit(5500000)
print("recursion limit:%d " % (sys.getrecursionlimit()))
There is no built-in tail-recursion optimization in Python. However, we can "rebuild" the function through the abstract syntax tree (AST), eliminating the recursion there and replacing it with a loop. Guido was absolutely right: this approach has some limitations, so I can't recommend it for use.
However, I still wrote (rather as a training example) my own version of the optimizer, and you can even try how it works.
Install this package via pip:
pip install astrologic
Now you can run this sample code:
from astrologic import no_recursion

counter = 0

@no_recursion
def recursion():
    global counter
    counter += 1
    if counter != 10000000:
        return recursion()
    return counter

print(recursion())
This solution is not stable, and you should never use it in production. You can read about some significant restrictions on the GitHub page (in Russian, sorry). However, this solution is quite "real", working without intrusive changes to your code or other similar tricks.
A tail call can never be optimized to a jump in Python. An optimization is a program transformation that preserves the program's meaning. Tail-call elimination doesn't preserve the meaning of Python programs.
One problem, often mentioned, is that tail-call elimination changes the call stack, and Python allows for runtime introspection of the stack. But there is another problem that is rarely mentioned. There is probably a lot of code like this in the wild:
import mmap

def map_file(path):
    f = open(path, 'rb')
    return mmap.mmap(f.fileno(), 0)   # length 0 maps the whole file
The call to mmap.mmap is in tail position. If it were replaced by a jump, then the current stack frame would be discarded before control was passed to mmap. The current stack frame contains the only reference to the file object, so the file object could (and in CPython would) be freed before mmap is called, which would close the file descriptor, invalidating it before mmap sees it.
At best, the code would fail with an exception. At worst, the file descriptor could be reused in another thread, causing mmap to map the wrong file. So this "optimization" would be a potentially disastrous thing to unleash on the huge body of existing Python code.
The Python spec guarantees that such problems won't occur, so you can be sure that no conformant implementation will ever convert return f(args) into a jump—unless, perhaps, it has a sophisticated static analysis engine that can prove that discarding an object early will have no observable consequences in this case.
None of that would prevent Python from adding a syntax for explicit tail calls with jump semantics, such as
return from f(args)
That wouldn't break code that didn't use it, and it would probably be useful for autogenerated code and some algorithms. GvR is no longer BDFL, so it might happen, but I wouldn't hold my breath.
I'm trying to add a cache of sorts to an expensive function using a fixed dictionary. Something like this:
def func(arg):
    if arg in precomputed:
        return precomputed[arg]
    else:
        return expensive_function(arg)
Now it would be a bit cleaner if I could do something like this using dict.get() default values:
def func(arg):
    return precomputed.get(arg, expensive_function(arg))
The problem is, expensive_function() runs regardless of whether precomputed.get() succeeds, so we get all of the fat for none of the flavor.
Is there a way I can defer the call to expensive_function() here, so it is only called if the precomputed.get() lookup fails?
If it's cleanliness you are looking for, I suggest using the standard library rather than reinventing the wheel:
from functools import lru_cache

@lru_cache
def expensive_function(arg):
    # do expensive thing
    pass
Now all calls to expensive_function are memoised, and you can call it without dealing with cache yourself. (If you are worried about memory consumption, you can even limit the cache size.)
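For instance, a toy illustration (time.sleep stands in for the real work):

import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_function(arg):
    time.sleep(1)            # stand-in for the expensive computation
    return arg * 2

expensive_function(21)       # takes about a second
expensive_function(21)       # returns instantly from the cache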
To answer the literal question: Python has no way of creating functions or macros that lazily evaluate their parameters, like Haskell or Scheme do. The only way to defer a calculation in Python is to wrap it in a function or a generator, which would be less, not more, readable than your original code. A conditional expression at least keeps it to one line:

def func(arg):
    result = precomputed[arg] if arg in precomputed else expensive_function(arg)
    return result
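If you do want a get() with a deferred default, a small helper that takes a zero-argument callable is one way to sketch it (lazy_get is a hypothetical name, not a dict method):

def lazy_get(d, key, thunk):
    # thunk is a zero-argument callable, evaluated only on a miss
    if key in d:
        return d[key]
    return thunk()

value = lazy_get(precomputed, arg, lambda: expensive_function(arg))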
To implement prettified XML, I have written the following code:
import xml.etree.ElementTree as ET
from xml.dom import minidom

def prettify_by_response(response, prettify_func):
    root = ET.fromstring(response.content)
    return prettify_func(root)

def prettify_by_str(xml_str, prettify_func):
    root = ET.fromstring(xml_str)
    return prettify_func(root)

def make_pretty_xml(root):
    rough_string = ET.tostring(root, "utf-8")
    reparsed = minidom.parseString(rough_string)
    xml = reparsed.toprettyxml(indent="\t")
    return xml

def prettify(response):
    if isinstance(response, (str, bytes)):
        return prettify_by_str(response, make_pretty_xml)
    else:
        return prettify_by_response(response, make_pretty_xml)
In the prettify_by_response and prettify_by_str functions, I pass the function make_pretty_xml as an argument.
Instead of passing the function as an argument, I could simply call it directly, e.g.:

def prettify_by_str(xml_str, prettify_func):
    root = ET.fromstring(xml_str)
    return make_pretty_xml(root)

One advantage of passing the function as an argument over calling it directly is that these functions are not tightly coupled to make_pretty_xml.
What would be the other advantages, or am I just adding complexity?
This seems very open to biased answers, so I'll try to be impartial, but I can't make any promises.
First, higher-order functions (HOFs) are functions that receive and/or return functions. The advantages are questionable; I'll try to enumerate the uses of HOFs and point out the good and the bad of each one.
Callbacks
Callbacks came about as a solution to blocking calls. I need B to happen after A, so I call something that blocks on A and then calls B. This naturally leads to the thought: my system wastes a lot of time waiting for things to happen; what if, instead of waiting, I pass in what needs to be done as an argument? As with anything new in technology that hasn't been scaled yet, it seems like a good idea until it has to scale.
Callbacks are very common in event systems. If you've ever written JavaScript, you know what I'm talking about.
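A toy sketch of the pattern in Python (all names are made up):

def fetch_data(on_done):
    data = {"status": "ok"}   # pretend this came from a slow I/O call
    on_done(data)             # hand the result to the caller's callback

fetch_data(lambda data: print("got:", data))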
Algorithm abstraction
Some designs, mostly behavioral ones, can use HOFs to choose an algorithm at runtime. You can have a high-level algorithm that receives functions dealing with low-level details. This leads to more abstraction, code reuse, and portable code; here, portable means you can write code to handle new low-level cases without changing the high-level ones. This is not specific to HOFs, but they can be of great help.
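For example, a minimal sketch of a high-level routine parameterized by its low-level scoring strategy (the names are made up):

def find_best(items, score):
    # high-level algorithm; `score` supplies the low-level strategy
    best = items[0]
    for item in items[1:]:
        if score(item) > score(best):
            best = item
    return best

words = ["pear", "fig", "banana"]
print(find_best(words, len))                  # longest word: 'banana'
print(find_best(words, lambda w: -len(w)))    # shortest word: 'fig'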
Attaching behavior to another function
The idea here is taking a function as an argument and returning a function that does exactly what the argument function does, plus some attached behavior. And this is where (I think) HOFs really shine.
Python decorators are a perfect example. They take a function as an argument and return another function, which is bound to the same identifier as the first function:
@foo
def bar(*args):
    ...

is the same as

def bar(*args):
    ...
bar = foo(bar)
Now, reflect on this code
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
fib is just a Fibonacci function; it calculates the nth Fibonacci number. Now lru_cache attaches a new behavior: caching results for previously calculated values. The logic inside the fib function is not tainted by the LRU-cache logic. What a beautiful piece of abstraction we have here.
Applicative style programming or point-free programming
The idea here is to remove variables, or "points", and to combine function applications to express algorithms. I'm sure there are lots of people better than me on this subject wandering SO.
As a side note, this is not a very common style in Python.
for i in it:
    func(i)

mapped_it = map(func, it)

In the second example, we removed the i variable. This is common in the parsing world. As another side note, the map function is lazy in Python 3, so the second example has no effect until you iterate over mapped_it.
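Another step in the same direction uses functools.partial to fix an argument without introducing a lambda or a variable (power is a made-up example):

from functools import partial

def power(base, exp):
    return base ** exp

square = partial(power, exp=2)         # no explicit variable, no lambda
print(list(map(square, [1, 2, 3])))    # [1, 4, 9]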
Your case
In your case, you are returning the value of the callback call. In fact, you don't need the callback: you can simply line up the calls as you did, and for this case you don't need HOFs.
I hope this helps, and that somebody can show better examples of applicative style :)
Regards
Say I have a function or method that does something repetitive, like checking a value, before performing every operation it does, like so:
def myfunc():
    if mybool:
        do_operation_1()
    else:
        return
    if mybool:
        do_operation_2()
    else:
        return
    ...
These checks get repetitive, and end up wasting a lot of time and keyboard springs, especially when they are needed very often.
If you have control over the operation functions, like do_operation_N, you can decorate each function with something that checks the boolean.
But what if you don't have control over the individual do_operation_N operations? If, for each line in a function or method, I want the same check to be performed, is there some way to "insert" it without explicitly writing it in on each operation line? For example, is there some decorator magic by which I could do the following?
def magic_decorator(to_decorate):
    def check(*args, **kwargs):
        for call in to_decorate:            # magic
            if mybool:
                to_decorate.do_call(call)   # magic
            else:
                return                      # or break, raise an exception, etc.
    return check

@magic_decorator
def myfunc():
    do_operation_1()
    do_operation_2()
    ...
If there is a way to achieve this, I don't care if it uses decorators or not; I just want some way to say "for every line in function/method X, do Y first".
The "magic" example of a do_call method above is shorthand for what I'm after, but it would encounter serious problems with out-of-order execution of individual lines (for example, if a function's first line was a variable assignment, and its second was a use of that variable, executing them out of order would cause problems).
To be clear: the ability to externally control the line-by-line order of a function's execution is not what I'm trying to achieve: ideally, I'd just implement something that, in the natural execution order, would perform an operation each time myfunc does something. If "does something" ends up being limited to "calls a function or method" (excluding assignments, if checks, etc), that is fine.
Store your operations in a sequence, then use a loop:
ops = (do_operation_1, do_operation_2, do_operation_3)

for op in ops:
    if mybool:
        op()
    else:
        return
Essentially, you can extract the file and line number from the decorated function, re-read its source, parse it into an AST, insert the check nodes into the AST, and then compile the modified AST and use the result as the function.
This method also works for very long functions, which are a problem with the sequence-of-operations approach above.
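Here is a minimal sketch of that idea; guard_calls and the module-level mybool are assumptions, and real code would also need to handle closures, methods, and nested definitions:

import ast
import inspect
import textwrap

def guard_calls(func):
    # Re-read the decorated function's source and parse it into an AST.
    source = textwrap.dedent(inspect.getsource(func))
    tree = ast.parse(source)
    fn = tree.body[0]
    fn.decorator_list = []   # don't re-apply this decorator on exec
    new_body = []
    for stmt in fn.body:
        # Wrap every bare call statement in `if mybool: ... else: return`.
        if isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Call):
            new_body.append(ast.If(
                test=ast.Name(id='mybool', ctx=ast.Load()),
                body=[stmt],
                orelse=[ast.Return(value=None)],
            ))
        else:
            new_body.append(stmt)
    fn.body = new_body
    ast.fix_missing_locations(tree)
    namespace = func.__globals__   # assumes mybool lives here at call time
    exec(compile(tree, filename='<guard_calls>', mode='exec'), namespace)
    return namespace[func.__name__]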
Let's say that a function A is required only by function B. Should A be defined inside B?
Simple example. Two methods, one called from another:
def method_a(arg):
    some_data = method_b(arg)

def method_b(arg):
    return some_data
In Python we can declare a def inside another def. So, if method_b is required for, and called only from, method_a, should I declare method_b inside method_a? Like this:
def method_a(arg):
    def method_b(arg):
        return some_data
    some_data = method_b(arg)
Or should I avoid doing this?
>>> def sum(x, y):
...     def do_it():
...         return x + y
...     return do_it
...
>>> a = sum(1, 3)
>>> a
<function do_it at 0xb772b304>
>>> a()
4
Is this what you were looking for? It's called a closure.
You don't really gain much by doing this; in fact it slows method_a down, because a new function object for method_b will be created every time it's called. Given that, it would probably be better to just prefix the function name with an underscore to indicate it's a private method, i.e. _method_b.
I suppose you might want to do this if the nested function's definition varied each time for some reason, but that may indicate a flaw in your design. That said, there is a valid reason to do this: it allows the nested function to use arguments that were passed to the outer function but not explicitly passed on to it, which often comes up when writing function decorators, for example. That's what is being shown in the accepted answer, although no decorator is being defined or used there.
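For example, a sketch of a decorator whose wrapper silently uses the prefix argument that was passed to the outer function (log_calls is a made-up name):

import functools

def log_calls(prefix):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # `prefix` comes from the outer call; it is never passed to wrapper
            print(prefix, func.__name__, args, kwargs)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@log_calls("TRACE:")
def add(a, b):
    return a + b

add(2, 3)   # prints: TRACE: add (2, 3) {}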
Update:
Here's proof that nesting them is slower (using Python 3.6.1), although admittedly not by much in this trivial case:
setup = """
class Test(object):
def separate(self, arg):
some_data = self._method_b(arg)
def _method_b(self, arg):
return arg+1
def nested(self, arg):
def method_b2(self, arg):
return arg+1
some_data = method_b2(self, arg)
obj = Test()
"""
from timeit import Timer
print(min(Timer(stmt='obj.separate(42)', setup=setup).repeat())) # -> 0.24479823284461724
print(min(Timer(stmt='obj.nested(42)', setup=setup).repeat())) # -> 0.26553459700452575
Note I added some self arguments to your sample functions to make them more like real methods (although method_b2 still isn't technically a method of the Test class). Also the nested function is actually called in that version, unlike yours.
Generally, no, do not define functions inside functions.
Unless you have a really good reason. Which you don't.
Why not?
It prevents easy hooks for unit testing. You are unit testing, aren't you?
It doesn't actually hide the function completely anyway; it's safer to assume nothing in Python ever is hidden.
Use standard Python code style conventions to encapsulate methods instead (e.g. an underscore prefix).
You will be needlessly recreating a function object for the identical code every single time you run the outer function.
If your function is really that simple, you should be using a lambda expression instead.
What is a really good reason to define functions inside functions?
When what you actually want is a dingdang closure.
A function inside of a function is commonly used for closures.
(There is a lot of contention over what exactly makes a closure a closure.)
Here's an example using the built-in sum(). It defines start once and uses it from then on:
def sum_partial(start):
    def sum_start(iterable):
        return sum(iterable, start)
    return sum_start
In use:
>>> sum_with_1 = sum_partial(1)
>>> sum_with_3 = sum_partial(3)
>>>
>>> sum_with_1
<function sum_start at 0x7f3726e70b90>
>>> sum_with_3
<function sum_start at 0x7f3726e70c08>
>>> sum_with_1((1,2,3))
7
>>> sum_with_3((1,2,3))
9
Built-in Python closure
functools.partial is an example of a closure.
From the Python docs, it's roughly equivalent to:
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc
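In use, it looks like this (add is illustrative; functools.partial itself behaves the same way):

def add(a, b):
    return a + b

add_five = partial(add, 5)   # freeze the first positional argument
print(add_five(3))           # 8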
(Kudos to @user225312 below for the answer. I find this example easier to figure out, and hopefully it will help answer @mango's comment.)
It's actually fine to declare one function inside another one. This is especially useful when creating decorators.
However, as a rule of thumb, if the function is complex (more than 10 lines), it might be a better idea to declare it at the module level.
I found this question because I wanted to ask why there is a performance impact if one uses nested functions. I ran tests for the following functions using Python 3.2.5 on a Windows notebook with a quad-core 2.5 GHz Intel i5-2530M processor:
def square0(x):
    return x*x

def square1(x):
    def dummy(y):
        return y*y
    return x*x

def square2(x):
    def dummy1(y):
        return y*y
    def dummy2(y):
        return y*y
    return x*x

def square5(x):
    def dummy1(y):
        return y*y
    def dummy2(y):
        return y*y
    def dummy3(y):
        return y*y
    def dummy4(y):
        return y*y
    def dummy5(y):
        return y*y
    return x*x
I measured the following 20 times, also for square1, square2, and square5:
s = 0
for i in range(10**6):
    s += square0(i)
and got the following results
m = mean, s = standard deviation, m0 = mean of first testcase
[m-3s, m+3s] is a 0.997 confidence interval if normally distributed

square?   m       s        m/m0    [m-3s, m+3s]
square0   0.387   0.01515  1.000   [0.342, 0.433]
square1   0.460   0.01422  1.188   [0.417, 0.503]
square2   0.552   0.01803  1.425   [0.498, 0.606]
square5   0.766   0.01654  1.979   [0.717, 0.816]
square0 has no nested function, square1 has one nested function, square2 has two nested functions and square5 has five nested functions. The nested functions are only declared but not called.
So if you have defined five nested functions in a function that you don't call, then the execution time of the function is twice that of the function without a nested function. I think one should be cautious when using nested functions.
The Python file for the whole test that generates this output can be found at ideone.
So in the end it is largely a question of how smart the Python implementation is or is not, particularly in the case of the inner function not being a closure but simply an in-function helper.
In clean understandable design having functions only where they are needed and not exposed elsewhere is good design whether they be embedded in a module, a class as a method, or inside another function or method. When done well they really improve the clarity of the code.
And when the inner function is a closure that can also help with clarity quite a bit even if that function is not returned out of the containing function for use elsewhere.
So I would say generally do use them but be aware of the possible performance hit when you actually are concerned about performance and only remove them if you do actual profiling that shows they best be removed.
Do not do premature optimization of just using "inner functions BAD" throughout all python code you write. Please.
It's really a principle about API exposure.
In Python, it's a good idea to avoid exposing an API in outer space (a module or a class); a function is a good place for encapsulation.
It can be a good idea when you ensure:
the inner function is ONLY used by the outer function;
the inner function has a good name explaining its purpose, because the code talks;
the code cannot otherwise be directly understood by your colleagues (or other code-readers).
Even so, abusing this technique may cause problems and implies a design flaw.
This is just from my experience; maybe I've misunderstood your question.
It's perfectly OK doing it that way, but unless you need to use a closure or return the function, I'd probably put it at the module level. I imagine in the second code example you mean:
...
some_data = method_b() # not some_data = method_b
otherwise, some_data will be the function.
Having it at the module level will allow other functions to use method_b() and if you're using something like Sphinx (and autodoc) for documentation, it will allow you to document method_b as well.
You may also want to consider just putting the functionality in two methods of a class, if you're doing something that can be represented by an object. This contains the logic well too, if that's all you're looking for.
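For instance, a sketch of the class version (all names are placeholders):

class DataProcessor:
    def method_a(self, arg):
        some_data = self._method_b(arg)
        return some_data

    def _method_b(self, arg):
        return arg * 2   # stand-in for the real work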
You can use it to avoid defining global variables. This gives you an alternative to other designs. Here are three designs presenting solutions to the same problem.
A) Using functions without globals
def calculate_salary(employee, list_with_all_employees):
    x = _calculate_tax(list_with_all_employees)
    # some other calculations done with x
    y = ...  # something
    return y

def _calculate_tax(list_with_all_employees):
    return 1.23456  # return something
B) Using functions with globals
_list_with_all_employees = None

def calculate_salary(employee, list_with_all_employees):
    global _list_with_all_employees
    _list_with_all_employees = list_with_all_employees
    x = _calculate_tax()
    # some other calculations done with x
    y = ...  # something
    return y

def _calculate_tax():
    return 1.23456  # return something based on the _list_with_all_employees var
C) Using functions inside another function
def calculate_salary(employee, list_with_all_employees):
    def _calculate_tax():
        return 1.23456  # return something based on the list_with_all_employees var
    x = _calculate_tax()
    # some other calculations done with x
    y = ...  # something
    return y
Solution C) allows you to use variables from the enclosing function's scope without declaring them in the inner function or passing them in. This might be useful in some situations.
Do something like:
def some_function():
    return some_other_function()

def some_other_function():
    return 42
If you were to run some_function(), it would then run some_other_function() and return 42.
EDIT: I originally stated that you shouldn't define a function inside of another but it was pointed out that it is practical to do this sometimes.
A function in a function, in Python:
def Greater(a, b):
    if a > b:
        return a
    return b

def Greater_new(a, b, c, d):
    return Greater(Greater(a, b), Greater(c, d))

print("Greater Number is :-", Greater_new(212, 33, 11, 999))