Pythonic way around Enums [closed]

What is the pythonic way to tell the caller of a function what values a given parameter supports?
Here is an example from PyQt (a GUI framework). Say I have a checkbox:
class checkbox(object):
    ...
    def setCheckState(self, value):
        ...
Here, setCheckState() should only expect checked or unchecked.
PyQt uses a built-in enumeration (e.g. Qt.Checked or Qt.Unchecked), but this is awful: I am constantly digging through the documentation to find the enum for the object I am working with.
Obviously PyQt is written in an unpythonic C++ style. How should this or a similar problem be handled in Python? According to PEP 435, enums are a recent addition to the language, intended for fairly specific applications, so I would assume there is (or was) a better way to handle this?
I want to make the code I write easy to use when my functions require specific parameter values, almost like a combobox for functions.

The One Obvious Way is function annotations.
import enum

class CheckBox(enum.Enum):
    Off = 0
    On = 1

def setCheckState(self, value: CheckBox):
    ...
This says quite clearly that value should be an instance of CheckBox. Having Enum just makes that a bit easier.
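For example, a caller (or an IDE) can recover the accepted values from the annotation at runtime; here is a minimal self-contained sketch (with self dropped for brevity):

import enum
import inspect

class CheckBox(enum.Enum):
    Off = 0
    On = 1

def setCheckState(value: CheckBox):
    ...

# The annotation is introspectable, so callers and tools can list the
# accepted values without hunting through documentation.
param = inspect.signature(setCheckState).parameters['value']
print(list(param.annotation))  # [<CheckBox.Off: 0>, <CheckBox.On: 1>]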
Annotations themselves aren't directly supported in Python 2.7, though. Common workarounds include putting that information in the function's docstring (where various tools can find it) or in comments (as we already knew).
If you're looking for an approach for your own code, use an annotating decorator. This has the advantage of continuing to work in Python 3+:
class annotate(object):
    def __init__(self, **kwds):
        self.kwds = kwds
    def __call__(self, func):
        func.__annotations__ = self.kwds
        return func

@annotate(value=CheckBox)
def setCheckState(self, value):
    ...
To be a robust decorator it should check that the contents of kwds match the function's parameter names.
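One way that check might look (a sketch that validates only against the positional parameter names):

class annotate(object):
    def __init__(self, **kwds):
        self.kwds = kwds
    def __call__(self, func):
        # co_varnames starts with the function's positional parameter names
        params = func.__code__.co_varnames[:func.__code__.co_argcount]
        unknown = set(self.kwds) - set(params)
        if unknown:
            raise TypeError('no such parameter(s): %s' % ', '.join(sorted(unknown)))
        func.__annotations__ = self.kwds
        return func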

This will do the trick:
import collections

def create_enum(container, start_num, *enum_words):
    return collections.namedtuple(container, enum_words)(
        *range(start_num, start_num + len(enum_words)))

Switch = create_enum('enums', 1, 'On', 'Off')
Switch is your enum:
In [20]: Switch.On
Out[20]: 1
In [21]: Switch.Off
Out[21]: 2
OK, I see the error of my ways: I mixed up representation with value.
Nevertheless, if you want to enumerate a larger range, this fake-enum approach saves you from adding the values manually, provided, of course, that the values are sequential numbers.
And I hate extra typing :-)
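For instance, a longer sequential range costs no extra typing (Weekday is just a hypothetical example):

Weekday = create_enum('Weekday', 1, 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun')

In [22]: Weekday.Wed
Out[22]: 3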


Return Order of Magic Methods [closed]

I've been programming for a couple of months now, and now I actually have a question... So, if I am not completely wrong, there is just one return per call, right? It doesn't matter if it's None or a certain value, but there can't be, like, 2 return statements in one call? So let's move on to the magic methods. In which order do they get processed?
def __str__(self):
    return f"{self.first} {self.second}"

def __repr__(self):
    return "{} {}".format(self.first, self.second)
Always the last one? Or are there differences between certain magic methods in terms of ranking? Or do they both get processed but just one gets returned?
There is no return order. Each magic method is a hook called by the Python implementation in order to implement specific protocols.
x.__str__ defines what str(x) means.
x.__repr__ defines what repr(x) means.
And that's it. Well, almost.
You also need to know when str or repr might be used aside from explicit calls. Some examples:
print calls str on each of its arguments to ensure that it has str values to write to the appropriate file.
The interactive interpreter calls repr on the value of each expression it evaluates.
In addition, object.__str__ falls back to use __repr__, I think by invoking x.__repr__() directly (rather than calling repr(x), which would then call x.__repr__()). So str(x) can indirectly be implemented using a __repr__ method if no class involved defined a __str__ method.
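A quick demonstration of the fallback, using a hypothetical Pair class:

class Pair:
    def __init__(self, first, second):
        self.first, self.second = first, second

    def __repr__(self):
        return "{} {}".format(self.first, self.second)

p = Pair(1, 2)
print(repr(p))  # 1 2 -- repr() calls __repr__
print(str(p))   # 1 2 -- no __str__ defined, so the default falls back to __repr__
print(p)        # 1 2 -- print calls str on its argument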
Other groups of magic methods might cooperate in order to define a more complicated protocol. For example,
x += y
could involve several options, tried in order:
x = x.__iadd__(y), if x.__iadd__ is defined
x = x.__add__(y), if x.__iadd__ is not defined or x.__iadd__(y) returned NotImplemented
x = y.__radd__(x), if x.__add__ is not defined or x.__add__(y) returned NotImplemented
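A small sketch of the first hook winning (Total is a hypothetical class):

class Total:
    def __init__(self, value):
        self.value = value

    def __iadd__(self, other):
        # tried first for +=: mutate in place and return self
        self.value += other
        return self

    def __add__(self, other):
        # reached only if __iadd__ were missing or returned NotImplemented
        return Total(self.value + other)

t = Total(1)
t += 2          # calls t.__iadd__(2) and rebinds t to the result
print(t.value)  # 3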

Bind the output of a method before using or call the method when you need its output? [closed]

In both examples, obj.method() returns a list.
Example A:

if obj.method():
    for i in obj.method():
        # do stuff

Example B:

lst = obj.method()
if lst:
    for i in lst:
        # do stuff
Which is better? It would seem to me that in some languages (though I don't know which), example A would result in obj.method() being needlessly evaluated twice, and example B would be best practice. However, perhaps other languages (again, I don't know which) might retain the output of a method in memory in case that method is called again, avoiding the same evaluation twice and resulting in little difference between examples A and B. Is this so? If so, can you give examples of a language for each case? And the real reason for the question: which is best practice in Python?
Unless your Python interpreter has JIT capabilities, the method will be evaluated every time you call it.
And even where JIT compilation is possible, the compiler/interpreter has to prove that the method has no side effects, that is, that it is deterministic, before it can reuse a previous result.
For example, consider a method that pulls data from a database or a method that contains a call to a random number generator:
import random

def method():
    return random.uniform(0.0, 1.0)
The output of such a method cannot be cached, because the second time you call it, it may change.
On the other hand, getter methods that merely combine stored data are a good example of deterministic methods, provided they do not call a non-deterministic method in their body.
from dataclasses import dataclass

@dataclass
class Example:
    a: list
    b: list

    def method(self):
        return self.a + self.b
In practice, you are better off not assuming anything about the compiler/interpreter and doing these small, easy optimizations yourself. You also have to consider that your code may run on multiple platforms, which further complicates things.
So I would recommend calling the method only once and saving its output in a temporary variable:
result = obj.method()
if result:
    for i in result:
        # do stuff
And given that it's Python, I recommend asking for forgiveness with the try keyword if, most of the time you run the method, its output is not None:
result = obj.method()
try:
    for i in result:
        # do stuff
except TypeError:
    pass

When is it appropriate to organize code using a class with Python? [closed]

While I have a good amount of experience using Python, I find that sometimes it's quite difficult to determine whether related functions and attributes should be put inside a class. More specifically, I have a function that uses the attributes of the class, and the following functions sequentially use the returned value of the previous function, e.g. Function 1 --> Function 2 --> Function 3 and so forth, with each function returning something.
I wish to understand if it makes sense to use a class in situations like this as it is a common occurrence with me. I want to make sure that the object (sales table) is created in a way that's logical and clean.
So far I've created just a simple class with some attributes and instance methods. I'm not sure how else to go about it. I have looked up numerous posts on Stack Overflow, articles and many other resources. I believe I have a decent understanding of the purpose of a class, but less so of when it's appropriate to use one.
To be clear, I'm not asking for help on the functions themselves or their logic (although I appreciate any suggestions!). I just want to know if using a class is the way to go. I did not include any code within the functions as I don't think their logic is relevant to my question (I can add it if necessary!)
class SalesTable:
    def __init__(self, banner, start_year, start_month, end_year, end_month):
        """These attributes act as filters when searching for the relevant data."""
        self.banner = banner
        self.start_year = start_year
        self.start_month = start_month
        if not end_year:
            self.end_year = start_year
        else:
            self.end_year = end_year
        if not end_month:
            self.end_month = start_month
        else:
            self.end_month = end_month

    def sales_periods(self):
        """Will create a dict with a key as the year and each year will have a
        list of months as the value. The stated attributes are used ONLY here
        as filters to determine what years and months are included."""
        pass

    def find_sales_period_csv(self):
        """Using the dictionary returned from the function above, will search
        through the relevant directories and subdirectories to find all the
        paths for individual csvs where the sales data is stored, as determined
        by the values in the dictionary, and store the paths in a list."""
        pass

    def csv_to_df(self):
        """Using the list returned from the function above, will take each csv
        path in the list, convert them into dataframes, and store those
        dataframes in another list."""
        pass

    def combine_dfs(self):
        """Using the list returned from the function above, will concatenate
        all dfs into a single dataframe."""
        pass

    def check_data(self):
        """Maybe do some checking here to ensure all relevant data concatenated
        properly (i.e. total row count etc.)."""
        pass
Ideally I'd like to return a sales table from the last function (combine_dfs), following the sequence of functions. I can accomplish this quite easily; however, I'm not sure this is the best way to structure my script or whether it logically makes sense, despite it working as I want.
Since only sales_periods actually uses the instance attributes, and it returns a dict, not another instance of SalesTable, all the other methods can be moved out of the class and defined as regular functions:
class SalesTable:
    def __init__(self, banner, start_year, start_month, end_year, end_month):
        ...

    def sales_periods(self):
        # ...
        return some_dict

def find_sales_period_csv(dct):
    return some_list

def csv_to_df(lst):
    return some_list

def combine_dfs(lst):
    return some_df

def check_data(df):
    pass
And you'll call them all in a chained fashion:
x = SalesTable(...)
check_data(combine_dfs(csv_to_df(find_sales_period_csv(x.sales_periods()))))
Now take a closer look at your class: you only have two methods, __init__ and sales_periods. Unless __init__ does something expensive that you don't want to repeat (and you would call sales_periods on the same instance multiple times), the entire class can be reduced to a single function that combines __init__ and the sales_periods method:
def sales_periods(banner, start_year, start_month, end_year, end_month):
    ...
    return some_dict

check_data(combine_dfs(csv_to_df(find_sales_period_csv(sales_periods(...)))))
Broadly, there are two main uses for a class:
1) To prevent repetition. If you create the same kind of object multiple times, then it should be a class.
2) To group things together. It is a lot easier to read someone's code if all the related functions and attributes are grouped together. This also makes maintainability and portability easier.
It is common for methods to call each other within a class, since methods should ideally not be longer than 30 lines (though different groups have different standards). If a method is called only from within its class, then that method should be private, and you should prefix its name with __ (two underscores).
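For instance (a minimal sketch with hypothetical names):

class Report:
    def render(self):
        # public entry point, which delegates to the private helper
        return self.__header() + "body"

    def __header(self):
        # the double underscore marks this as private and triggers name
        # mangling: outside the class it is only visible as _Report__header
        return "=== report ===\n"

print(Report().render())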
If a bunch of data and functions seem to live together, which is to say you typically refer to them both at the same time, then you have good reason to think you may have an object on your hands.
Another good reason is if there's a natural name for the object. Weird, I know, but it really is a useful guiding principle.
Reading up on SOLID may also give you some food for thought.
People new to OOP tend to create too many classes (I know I did in the beginning). One problem with that is code readability: when code uses a custom class, it's often necessary to read the class definition to figure out what the class is supposed to do. If the code uses built-in types only, it's usually easier to figure out. Also, the complex internal state that is a natural feature of classes often is a source of subtle bugs and makes code more difficult to reason about.
This book is quite helpful
Each of your methods above looks like it relates to the class. So let's say you had defined a bunch of functions outside the class, and you were passing the same set of ten variables as arguments to each of them. That would be a sign that they should be in a class. Accessing and modifying too many variables, and passing them to other functions as arguments instead of having them as class attributes that get modified inside each of the methods, would be a sign that you had failed to take advantage of one of the benefits of classes. In that book, I remember a section that went into detail about various signs that your code needs OOP.

"outsourcing" exception-handling to a decorator [closed]

Many try/except/finally clauses not only "uglify" my code, but I often find myself using identical exception handling for similar tasks. So I was considering reducing redundancy by "outsourcing" them to a... decorator.
Because I was sure I was not the first to come to this conclusion, I googled and found this (imho) ingenious recipe, which adds the possibility to handle more than one exception.
But I was surprised that this doesn't seem to be a widely known and used practice per se, so I was wondering if there is maybe an aspect I wasn't considering?
Is it bogus to use the decorator pattern for exception handling, or did I just miss it the whole time? Please enlighten me! What are the pitfalls?
Is there maybe even a package/module out there which supports the creation of such exception handling in a reasonable way?
The biggest reason to keep the try/except/finally blocks in the code itself is that error recovery is usually an integral part of the function.
For example, if we had our own int() function:
def MyInt(text):
    return int(text)
What should we do if text cannot be converted? Return 0? Return None?
If you have many simple cases then I can see a simple decorator being useful, but I think the recipe you linked to tries to do too much: it allows a different function to be activated for each possible exception. In cases such as those (several different exceptions, several different code paths) I would recommend a dedicated wrapper function.
Here's my take on a simple decorator approach:
class ConvertExceptions(object):
    func = None

    def __init__(self, exceptions, replacement=None):
        self.exceptions = exceptions
        self.replacement = replacement

    def __call__(self, *args, **kwargs):
        if self.func is None:
            # first call: we're being applied as a decorator, so capture the function
            self.func = args[0]
            return self
        try:
            return self.func(*args, **kwargs)
        except self.exceptions:
            return self.replacement
and sample usage:
@ConvertExceptions(ValueError, 0)
def my_int(value):
    return int(value)

print(my_int('34'))   # prints 34
print(my_int('one'))  # prints 0
Basically, the drawback is that you no longer get to decide how to handle the exception in the calling context (by just letting the exception propagate). In some cases this may result in a lack of separation of responsibility.
A decorator in Python is not the same as the Decorator pattern, though there is some similarity. It is not completely clear what you mean here, but I think you mean the one from Python (thus, it is better not to use the word pattern).
Decorators in Python are not that useful for exception handling, because you would need to pass some context to the decorator. That is, you would either pass a global context, or hide function definitions within some outer context, which requires, I would say, a LISP-like way of thinking.
Instead of decorators you can use context managers. And I do use them for that purpose.
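For example, a minimal sketch with contextlib (the log_and_suppress helper name is hypothetical):

import os
from contextlib import contextmanager, suppress

# The standard library already covers the common "swallow this exception"
# case for a single block:
with suppress(FileNotFoundError):
    os.remove("stale.tmp")

# A custom context manager lets you write shared handling logic once:
@contextmanager
def log_and_suppress(exceptions):
    try:
        yield
    except exceptions as exc:
        print("handled: %r" % exc)

with log_and_suppress(ValueError):
    int("not a number")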

Python Generator Function Names -- is a prefix helpful? [closed]

Most functions are easy to name. Generally, a function name is based on what it does or the type of result it produces.
In the case of a generator function, however, the result could be an iterable over some other type.
def sometype(iterable):
    for x in iterable:
        yield some_transformation(x)
The sometype name feels misleading, since the function doesn't return an object of the named type. It's really an iterable over sometype.
A name like iter_sometype or gen_sometype feels a bit too much like Hungarian Notation. However, it also seems to clarify the intent of the function.
Going further, there are a number of more specialized cases, where a prefix might be helpful.
These are typical examples, some of which are available in itertools. However, we often have to write a version with some algorithmic wrinkle that makes it similar to itertools, but not a perfect fit.
def reduce_sometype(iterable):
    summary = sometype()
    for x in iterable:
        if some_rule(x):
            yield summary
            summary = sometype()
        summary.update(x)

def map_sometype(iterable):
    for x in iterable:
        yield some_complex_mapping(x)

def filter_sometype(iterable):
    for x in iterable:
        if some_complex_rule(x):
            yield x
Does the iter_, map_, reduce_, filter_ prefix help clarify the name of a generator function? Or is it just visual clutter?
If a prefix is helpful, what prefix suggestions do you have?
Alternatively, if a suffix is helpful, what suffix suggestions do you have?
Python 2 dicts have iter* methods, and lxml trees also have an iter method. Reading

for node in doc.iter():

seems familiar, so following that pattern I'd consider naming a generator of sometypes sometypes_iter, so that I could write, analogously:

for item in sometypes_iter():
Python provides a sorted function.
Following that pattern, I might make the verb-functions past tense:
sometypes_reduced
sometypes_mapped
sometypes_filtered
If you have enough of these functions, it might make sense to make a SomeTypes class so the method names could be shortened to reduce, map, and filter.
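A sketch of what that might look like (all names hypothetical):

class SomeTypes:
    """Hypothetical wrapper around an iterable of sometype values."""

    def __init__(self, iterable):
        self._iterable = iterable

    def __iter__(self):
        return iter(self._iterable)

    def map(self, transform):
        return SomeTypes(transform(x) for x in self._iterable)

    def filter(self, predicate):
        return SomeTypes(x for x in self._iterable if predicate(x))

# usage: for item in SomeTypes(items).filter(some_rule).map(some_transformation): ...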
If the functions can be generalized to accept or return types other than sometype, then of course it would make sense to remove sometype from the function name, and instead choose a name that emphasizes what it does rather than the types.
