Most functions are easy to name. Generally, a function name is based on what it does or the type of result it produces.
In the case of a generator function, however, the result could be an iterable over some other type.
def sometype(iterable):
    for x in iterable:
        yield some_transformation(x)
The sometype name feels misleading, since the function doesn't return an object of the named type. It's really an iterable over sometype.
A name like iter_sometype or gen_sometype feels a bit too much like Hungarian Notation. However, it also seems to clarify the intent of the function.
Going further, there are a number of more specialized cases, where a prefix might be helpful.
These are typical examples, some of which are available in itertools. However, we often have to write a version with some algorithmic complexity that makes it similar to itertools, but not a perfect fit.
def reduce_sometype(iterable):
    summary = sometype()
    for x in iterable:
        if some_rule(x):
            yield summary
            summary = sometype()
        summary.update(x)
def map_sometype(iterable):
    for x in iterable:
        yield some_complex_mapping(x)

def filter_sometype(iterable):
    for x in iterable:
        if some_complex_rule(x):
            yield x
Does the iter_, map_, reduce_, filter_ prefix help clarify the name of a generator function? Or is it just visual clutter?
If a prefix is helpful, what prefix suggestions do you have?
Alternatively, if a suffix is helpful, what suffix suggestions do you have?
Python 2 dicts have iter* methods (iterkeys, itervalues, iteritems). And lxml trees also have an iter method.
Reading
for node in doc.iter():
seems familiar, so
following that pattern, I'd consider naming a generator of sometypes sometypes_iter,
so that I could write, analogously,
for item in sometypes_iter():
Python provides a sorted function.
Following that pattern, I might make the verb-functions past tense:
sometypes_reduced
sometypes_mapped
sometypes_filtered
If you have enough of these functions, it might make sense to make a SomeTypes class so the method names could be shortened to reduce, map, and filter.
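A minimal sketch of that class-based idea (the SomeTypes class and the wrapped iterable are assumptions here; some_complex_mapping and some_complex_rule are the placeholders from the examples above):

class SomeTypes:
    def __init__(self, iterable):
        self.iterable = iterable

    def map(self):
        # same body as map_sometype above, with a shorter name
        for x in self.iterable:
            yield some_complex_mapping(x)

    def filter(self):
        # same body as filter_sometype above, with a shorter name
        for x in self.iterable:
            if some_complex_rule(x):
                yield x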
If the functions can be generalized to accept or return types other than sometype, then of course it would make sense to remove sometype from the function name, and instead choose a name that emphasizes what it does rather than the types.
I've been programming for a couple of months now, and now I actually have a question... So, if I am not completely wrong, there is just one return per call, right? It doesn't matter if it's None or a certain value, but there can't be, say, two return statements executed in one call? So let's move on to the magic methods. In which order do they get processed?
def __str__(self):
    return f"{self.first} {self.second}"

def __repr__(self):
    return "{} {}".format(self.first, self.second)
Always the last one? Or are there differences between certain magic methods in terms of ranking? Or do both get processed, but only one gets returned?
There is no return order. Each magic method is a hook called by the Python implementation in order to implement specific protocols.
x.__str__ defines what str(x) means.
x.__repr__ defines what repr(x) means.
And that's it. Well, almost.
You also need to know when str or repr might be used aside from explicit calls. Some examples:
print calls str on each of its arguments to ensure that it has str values to write to the appropriate file.
The interactive interpreter calls repr on the value of each expression it evaluates.
In addition, object.__str__ falls back to use __repr__, I think by invoking x.__repr__() directly (rather than calling repr(x), which would then call x.__repr__()). So str(x) can indirectly be implemented using a __repr__ method if no class involved defined a __str__ method.
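To make that concrete, here is a small sketch (the Pair class is invented for illustration, not taken from the question) with only __repr__ defined:

class Pair:
    def __init__(self, first, second):
        self.first = first
        self.second = second

    def __repr__(self):
        return "Pair({!r}, {!r})".format(self.first, self.second)

p = Pair(1, 2)
print(repr(p))   # Pair(1, 2) - repr(p) calls p.__repr__()
print(str(p))    # Pair(1, 2) - no __str__ defined, so object.__str__ falls back to __repr__
print(p)         # Pair(1, 2) - print calls str on its argument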
Other groups of magic methods might cooperate in order to define a more complicated protocol. For example,
x += y
could involve several options, tried in order:
x = x.__iadd__(y), if x.__iadd__ is defined
x = x.__add__(y), if x.__add__ is defined
x = y.__radd__(x), if x.__add__ is not defined or x.__add__(y) returned NotImplemented.
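A hedged sketch of that order (the Accumulator class is invented for illustration):

class Accumulator:
    def __init__(self, total=0):
        self.total = total

    def __iadd__(self, other):
        # tried first for +=: mutate in place and return self
        self.total += other
        return self

    def __add__(self, other):
        # fallback: build and return a new object
        return Accumulator(self.total + other)

a = Accumulator()
a += 5                    # calls a.__iadd__(5), since it is defined
print(a.total)            # 5
b = Accumulator(1) + 2    # plain +, so __add__ is used
print(b.total)            # 3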
In both examples, class.method() returns a list.
Example A:

if class.method():
    for i in class.method():
        # do stuff

Example B:

list = class.method()
if list:
    for i in list:
        # do stuff
Which is better? It would seem to me that in some languages (but I don't know which), example A would result in class.method() being needlessly evaluated twice, and example B would be best practice. However, perhaps other languages (again not knowing which) might retain the output of a method in memory in case that method is called again, therefore avoiding having to do the same evaluation twice and resulting in little difference between examples A and B. Is this so? If so, can you give examples of a language for each case? And the real reason for the question: which is best practice in Python?
Unless your Python interpreter has JIT capabilities, the method will be evaluated every time you call it.
And even when JIT compilation is possible, the compiler / interpreter has to prove that the method has no side effects, i.e. that it is deterministic, before it can reuse a previous result.
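A minimal sketch (the Demo class is invented here) showing that a plain CPython call simply runs the method body again every time:

class Demo:
    calls = 0

    def method(self):
        Demo.calls += 1
        return [1, 2, 3]

d = Demo()
if d.method():             # first evaluation
    for i in d.method():   # second evaluation
        pass
print(Demo.calls)          # 2 - the body ran twice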
For example, consider a method that pulls data from a database or a method that contains a call to a random number generator:
import random

def method():
    return random.uniform(0.0, 1.0)
The output of such a method cannot be cached, because the second time you call it, it may have changed.
On the other hand, getter methods that merely combine stored data are a good example of deterministic methods, provided they do not call a non-deterministic method in their body.
from dataclasses import dataclass

@dataclass
class Example:
    a: list
    b: list

    def method(self):
        return self.a + self.b
In practice, you are better off not assuming anything about the compiler / interpreter and doing these small, easy optimizations yourself. You also have to consider that your code may run on multiple platforms, which further complicates things.
So I would recommend calling the method only once and saving its output in a temporary variable:
result = class.method()
if result:
    for i in result:
        # do stuff
And given that it's Python, I recommend asking for forgiveness with a try block if, most of the time you run the method, its output is not None:
result = class.method()
try:
    for i in result:
        # do stuff
except TypeError:
    pass
What is the pythonic way to tell the caller of a function what values a given parameter supports?
Here is an example from PyQt (a GUI library). Say I have a checkbox,
class checkbox(object):
    ....

    def setCheckState(self, value):
        ....
Here, setCheckState() should only expect checked or unchecked.
PyQt uses a built-in enumeration (i.e. Qt.Checked or Qt.Unchecked), but this is awful. I am constantly in the documentation looking for the enum for the object I am working with.
Obviously PyQt is written in an unpythonic C++ style. How should this or a similar problem be handled in Python? According to PEP 435, enums seem to be a recent addition to the language, intended for very specific applications, so I would assume there is/was a better way to handle this?
I want to make the code I write easy to use when my functions require specific parameter values--almost like a combobox for functions.
The One Obvious Way is function annotations.
import enum

class CheckBox(enum.Enum):
    Off = 0
    On = 1

def setCheckState(self, value: CheckBox):
    ...
This says quite clearly that value should be an instance of CheckBox. Having Enum just makes that a bit easier.
Annotations themselves aren't directly supported in 2.7, though. Common workarounds include putting that information in the function doc string (where various tools can find it) or in comments (as we already knew).
If you are looking for a method for your own code, use an annotating decorator. This has the advantage of continuing to work in Python 3+:
class annotate(object):
    def __init__(self, **kwds):
        self.kwds = kwds
    def __call__(self, func):
        func.__annotations__ = self.kwds
        return func   # return the function so the decorated name still refers to it

@annotate(value=CheckBox)
def setCheckState(self, value):
    ...
To be robust, the decorator should check that the contents of kwds match the function's parameter names.
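A hedged usage sketch, assuming __call__ returns the decorated function as above and reusing the CheckBox enum from earlier:

@annotate(value=CheckBox)
def setCheckState(self, value):
    ...

print(setCheckState.__annotations__)   # {'value': <enum 'CheckBox'>}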
This will do the trick:
import collections

def create_enum(container, start_num, *enum_words):
    return collections.namedtuple(container, enum_words)(*range(start_num, start_num + len(enum_words)))
Switch = create_enum('enums', 1, 'On', 'Off')
Switch is your enum:
In [20]: Switch.On
Out[20]: 1
In [21]: Switch.Off
Out[21]: 2
OK, I see the error of my ways - I mixed up representation with value.
Nevertheless, if you want to enumerate a larger range, with my fake-enum approach you don't have to add the values manually - provided, of course, that you want sequential numbers.
And I hate extra typing :-)
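For comparison, a sketch of the same idea using the stdlib enum module's functional API (Python 3.4+, or the enum34 backport), which also assigns sequential values automatically:

import enum

Switch = enum.Enum('Switch', ['On', 'Off'])   # values start at 1

print(Switch.On.value)    # 1
print(Switch.Off.value)   # 2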
When I'm reading docs or examples, I often see the idea come up that you can assign an anonymous function to a variable. Why would I ever actually do this rather than just define a new function?
Some examples:
Clojure/Lisp
(def add2
  (fn [a] (+ 2 a)))

(add2 4) ;; => 6
Python
add2 = lambda e: e + 2
add2(3) # => 5
Scala
val add2 = (x: Int) => x + 2
add2(5) /* => 7 */
Obviously, these are trivial examples, but in production code I usually think of an anonymous function as a one-off function that I need for a specific use case (think higher-kinded types and the like).
Can anyone explain why I would assign an anonymous function to a variable? Is it a runtime/compile time thing? Are there certain performance characteristics that make this favorable?
I think the way it is presented is mostly so that the reader truly understands that functions are first-class in said languages. Had they only been used as arguments to other functions, perhaps the point might be lost. But using them in a very value-like way, as the right-hand side of an assignment, or calling a method on the lambda itself, etc., drives home the point that these are quite similar to numbers, strings, maps or any other value in the language.
Personally, I don't use this pattern because, as other comments have mentioned, it makes code harder to read and debug, and in some cases it lacks the full power of a proper function definition (as in Python).
However, when one writes code that actually passes functions as arguments, one is more or less doing just that; the assignment simply happens more indirectly than with the = operator.
According to the Python Docs:
Semantically, they are just syntactic sugar for a normal function definition.
As far as I know, there are no special performance characteristics of lambda that make it favourable. If you are thinking of using lambdas for complex tasks, think again: use functions.
Always use a def statement instead of an assignment statement that binds a lambda expression directly to an identifier.
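Part of PEP 8's rationale is visible at runtime; a small sketch (the names are invented here) of what you give up by binding a lambda to a name:

add2_lambda = lambda e: e + 2

def add2_def(e):
    return e + 2

print(add2_lambda.__name__)   # '<lambda>' - tracebacks and repr lose the real name
print(add2_def.__name__)      # 'add2_def'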
In Clojure, the reason is so you can use the function in more than one place. In fact
(defn add2 [x] (+ x 2))
is just shorthand for
(def add2 (fn [x] (+ x 2)))
I know that PEP 8 says not to assign a lambda expression to a name, because doing so misses the entire point of a lambda function.
But what about a recursive lambda function? I've found that in many cases it's really simple, clean and efficient to write a recursion by assigning a lambda to a name instead of defining a function. And PEP 8 doesn't mention recursive lambdas.
For example, let's compare a function that returns the greatest common divisor between two numbers:
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)
vs
gcd = lambda a, b: a if b == 0 else gcd(b, a % b)
So, what should I do?
You have "cheated" a bit in your question, since the regular function could also be rewritten like this:
def gcd(a, b):
    return a if b == 0 else gcd(b, a % b)
Almost as short as the lambda version, and even this can be further squeezed into a single line, but at the expense of readability.
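For completeness, that single-line version would look like this (legal, though PEP 8 discourages putting the body on the same line as the def):

def gcd(a, b): return a if b == 0 else gcd(b, a % b)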
The lambda syntax is generally used for simple anonymous functions that are normally passed as arguments to other functions. Assigning a lambda function to a variable doesn't make much sense; it is just another way of declaring a named function, but one that is less readable and more limited (you can't use statements in it).
A lot of thought has been put into PEP 8, and its recommendations are there for a reason, so I would not recommend deviating from it unless you have a very good reason to do so.
Go with the normal function definition. There's absolutely no benefit to using lambda over def, and a normal function definition is (for most people) much more readable. With lambda you gain nothing, but you often lose readability.
I would recommend reading this answer. Recursion doesn't change anything; in fact, in my opinion, it favours a normal def even more.
If you assign a lambda to a variable, you are no longer passing it as an argument or returning it within the same expression, which is the exact purpose of lambda.