Using nested def for organizing code - python

I keep nesting def statements as a way of grouping/organizing code, but after doing some reading I think I'm abusing it.
Is it kosher to do something like this?
import numpy

def generate_random_info():
    def random_name():
        return numpy.random.choice(['foo', 'bar'])

    def random_value():
        return numpy.random.rand()

    return {'name': random_name(), 'value': random_value()}

There is nothing wrong with it per se, but you should keep one thing in mind when you use structures like this: random_name and random_value are redefined every time you call generate_random_info(). That may not be a problem for these particular functions, especially if you don't call generate_random_info() very often, but it is overhead that can be avoided.
So you should probably move those function definitions outside of the generate_random_info function. Or, since the inner functions don't do much themselves and you just call each of them once, simply inline them:
def generate_random_info():
    return {
        'name': numpy.random.choice(['foo', 'bar']),
        'value': numpy.random.rand()
    }
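
If you would rather keep the helpers as named functions, a minimal sketch of the "move them outside" alternative mentioned above (same names as in the question) could look like this:

import numpy

def random_name():
    # defined once at import time instead of on every call to generate_random_info()
    return numpy.random.choice(['foo', 'bar'])

def random_value():
    return numpy.random.rand()

def generate_random_info():
    return {'name': random_name(), 'value': random_value()}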

Unless you plan to reuse the same chunk of code repeatedly within a single function (and only that function), I would avoid creating those inner functions just for the sake of it. I'm no expert on what happens at the computational level, but I would think that defining a function is more work than simply using the line inline as you have it now, especially if you're only going to call that function once.

Related

Refactoring a huge function into a set of smaller functions

In terms of clean code, how should a function that has nested for loops, if-else statements, and while loops be refactored? What would be the ideal, clean structure for a big function like this? Is it acceptable to break a function like this up into nested functions?
def main():
    try:
        for
            if
                for
                    while
                        for
                        for
                            if
                                for
                                    if
                                    else
                            if
                            else
                        if
    except:

if __name__ == "__main__":
    main()
Only nest loops inside loops if you really need to; otherwise avoid nesting them (for algorithmic performance reasons).
Follow the advice from Omri's answer: identify each step you are doing, give each step a clear name, and extract that step into its own function (which you call from the original function to perform the step); see the sketch after this answer.
This is different from nesting functions, which is done for other reasons. Here you are just making calls to helper functions defined somewhere else (not nested inside your function).
Do not surround everything in one try block, and avoid the catch-all bare except:. Wrap only the specific statement (or the few statements) that can actually fail, and list in the except clause only the error or error categories you are expecting there.
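As a hedged illustration of the extract-each-step advice (the step names, the data.txt file, and the helpers below are invented for the example, not taken from the question):

def load_records(path):
    # step 1: read the raw input lines
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def transform_record(record):
    # step 2: process a single record
    return record.upper()

def main():
    try:
        records = load_records('data.txt')   # hypothetical input file
    except FileNotFoundError:                # narrow except clause, as advised above
        records = []
    print([transform_record(r) for r in records])

if __name__ == "__main__":
    main()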
It is mainly opinion-based and depends on the code itself. A good rule of thumb is that each function needs to have one logical purpose.

Deferring function call in argument in python 3

I'm trying to add a cache of sorts to an expensive function using a fixed dictionary. Something like this:
def func(arg):
    if arg in precomputed:
        return precomputed[arg]
    else:
        return expensive_function(arg)
Now it would be a bit cleaner if I could do something like this using dict.get() default values:
def func(arg):
    return precomputed.get(arg, expensive_function(arg))
The problem is, expensive_function() runs regardless of whether precomputed.get() succeeds, so we get all of the fat for none of the flavor.
Is there a way I can defer the call to expensive_function() here so it is only called when the precomputed.get() lookup fails?
If it's cleanliness you are looking for, I suggest using a library rather than reinventing the wheel:
from functools import lru_cache

@lru_cache
def expensive_function(arg):
    # do expensive thing
    pass
Now all calls to expensive_function are memoised, and you can call it without managing the cache yourself. (If you are worried about memory consumption, you can even limit the cache size.)
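For instance, bounding the cache is just a matter of passing the standard maxsize argument that lru_cache accepts:

from functools import lru_cache

@lru_cache(maxsize=128)   # keep at most 128 results; maxsize=None means unbounded
def expensive_function(arg):
    pass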
To answer the literal question: Python has no way of creating functions or macros that lazily evaluate their arguments, the way Haskell or Scheme do. The only way to defer a computation in Python is to wrap it in a function or a generator, which would be less, not more, readable than your original code. The closest readable equivalent is a conditional expression:
def func(arg):
    result = precomputed[arg] if arg in precomputed else expensive_function(arg)
    return result
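
If you really do want a get()-style call, a minimal sketch of the wrap-it-in-a-function idea is to pass a zero-argument thunk and call it only on a cache miss (get_or_compute is an invented helper, not a dict method):

def get_or_compute(cache, key, thunk):
    # call the zero-argument thunk only when the key is missing
    if key in cache:
        return cache[key]
    return thunk()

def func(arg):
    return get_or_compute(precomputed, arg, lambda: expensive_function(arg))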

Function calls in a sequence

I am writing a program that must solve a task and the task has many points, so I made one function for each point.
In the main function, I am calling the functions (which all return a value) in the following way:
result = funcD(funcC(funcB(funcA(parameter))))
Is this way of chaining the function calls right and optimal, or is there a better way?
First, as everyone else said, your implementation is totally valid, and splitting it into multiple lines is a good way to improve readability.
However, if there are even more than 4 functions, here is a way to make your code simpler.
def chain_func(parameter, *functions):
    for func in functions:
        parameter = func(parameter)
    return parameter
This works because Python lets you pass a function around as a value and call it inside another function.
To use it, simply write chain_func(parameter, funcA, funcB, funcC, funcD).
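The same chaining can also be written with functools.reduce from the standard library; a sketch equivalent to the helper above:

from functools import reduce

def chain_func(parameter, *functions):
    # apply each function to the running result, left to right
    return reduce(lambda value, func: func(value), functions, parameter)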
There's nothing really wrong with that way. You could improve readability by instead calling them like this:
resultA = funcA(parameter)
resultB = funcB(resultA)
resultC = funcC(resultB)
resultD = funcD(resultC)
But that's really just a matter of personal preference and style.
If what they do and what they return is fixed, then the dependency between them is fixed as well, so you have no choice but to call them in this order. Otherwise there is no way of telling without knowing exactly what they do.
Whether you bind names to the partial results:
result1 = funcA(parameter)
#...
result = funcD(result3)
or call them as you've presented in your question doesn't make a significant difference.

Passing a variable from one function to another function

I have a function with way too much going on in it, so I've decided to split it up into smaller functions and call all of those block functions from a single function, e.g.:
def main_function(self):
    time_subtraction(self)
    pay_calculation(self, todays_hours)
and:
def time_subtraction(self):
    todays_hours = datetime.combine(datetime(1, 1, 1, 0, 0, 0), single_object2) - datetime.combine(datetime(1, 1, 1, 0, 0, 0), single_object)
    return todays_hours
So what I'm trying to accomplish here is to make todays_hours available to my main_function. I've read lots of documentation and other resources, but apparently I'm still struggling with this aspect.
EDIT --
This is not a method of a class. It's just a file where I have a lot of functions coded, and I import it where needed.
If you want to pass the return value of one function to another, you need to either nest the function calls:
pay_calculation(self, time_subtraction(self))
… or store the value so you can pass it:
hours = time_subtraction(self)
pay_calculation(self, hours)
As a side note, if these are methods in a class, you should be calling them as self.time_subtraction(), self.pay_calculation(hours), etc., not time_subtraction(self), etc. And if they aren't methods in a class, maybe they should be.
Often it makes sense for a function to take a Spam instance, and for a method of Spam to pass self as the first argument, in which case this is all fine. But the fact that you've defined def time_subtraction(self): implies that's not what's going on here, and that you're confused about methods vs. normal functions.
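For illustration, a hedged sketch of the method-based version that side note describes (the Payroll class and its fields are invented for the example, and the datetime arithmetic from the question is simplified):

from datetime import datetime

class Payroll:
    def __init__(self, start, end, hourly_rate):
        self.start = start              # clock-in datetime
        self.end = end                  # clock-out datetime
        self.hourly_rate = hourly_rate

    def time_subtraction(self):
        # the worked time as a timedelta
        return self.end - self.start

    def pay_calculation(self, todays_hours):
        return todays_hours.total_seconds() / 3600 * self.hourly_rate

    def main_function(self):
        todays_hours = self.time_subtraction()
        return self.pay_calculation(todays_hours)

print(Payroll(datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 17), 20).main_function())   # 160.0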

Self executing functions in python

I have occasionally used (lambda x: <code>)(<some input>) in Python to keep my namespace (the global namespace or some other scope) clean. One issue with the lambda solution is that it is a very limiting construct in terms of what it may contain.
Note: This is a habit from javascript programming
Is this a recommended way of preserving namespace? If so, is there a better way to implement a self-executing function?
Regarding the second half of the question
is there a better way to implement a self-executing function?
The standard way (<function-expression>)() is not possible in Python, because there is no way to put a multi-line block inside parentheses without breaking Python's fundamental syntax. Nonetheless, Python does recognize the need for using function definitions as expressions and provides decorators (PEP 318) as an alternative. PEP 318 has an extensive discussion of this issue, in case someone would like to read more.
With decorators, it would be like
evalfn = lambda f: f()

@evalfn
def _():
    print('I execute immediately')
Although vastly different syntactically, we shall see that it really is the same: the function definition is anonymous and used as an expression.
Using a decorator for self-executing functions is a bit of overkill compared to the let-call-del method shown below. However, it may be worth a try if there are many self-executing functions, if a self-executing function is getting too long, or if you simply don't want to bother naming these self-executing functions.
def f():
    print('I execute immediately')

f()
del f
For a function A that will be called only inside a specific function B, you can define A inside B; that way the enclosing namespace will not be polluted. For example, instead of:
def a_fn():
    # do something
    pass

def b_fn():
    # do something
    pass

def c_fn():
    b_fn()
    a_fn()
You can:
def c_fn():
    def a_fn():
        # do something
        pass

    def b_fn():
        # do something
        pass

    b_fn()
    a_fn()
Though I'm not sure if it's the Pythonic way, this is what I usually do.
You don't. It's a good idiom in JavaScript, but in Python you have neither the lightweight syntax for it nor any real need for it. If you need a function scope, define a function and call it. But very often you don't need one: you may need to pull code apart into multiple functions to make it more understandable, but then a name helps anyway, and the function may be useful in more than one place.
Also, don't worry about adding a few more names to a namespace. Python, unlike JavaScript, has proper namespaces, so a helper you define at module scope is not visible in other files by default (i.e. unless imported).
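As a small illustration of that last point, a sketch with a hypothetical helpers.py module (the module and function names are made up):

# helpers.py -- a hypothetical module
def format_name(name):
    # lives in the helpers namespace; other files don't see it unless they import it
    return name.strip().title()

# main.py -- a separate file
from helpers import format_name

print(format_name('  ada lovelace '))   # Ada Lovelace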
