Python class constant using class init method?

I've made a class which can be compared and sorted inside common data structures.
The thing is that I wanted to make two class constants for the maximum and minimum values the class can take, so that I could access these values just by importing MyClass and writing
obj = MyClass.MY_MAX_CONSTANT
The problem is that calling the constructor / __init__ method to initialize these constants inside the class body is not allowed.
In Java this would be declared as static and it would work, but I don't know how to create a class-level / static constant in Python using the constructor / __init__ method. I haven't found much by googling, only some general recipes for constants and suggestions for making properties.
I don't need a mechanism to avoid changing the constant value since I'm definitely not changing it.
My first try was:
class MyClass(object):

    MY_MAX_CONSTANT = MyClass(10, 10)
    MY_MIN_CONSTANT = MyClass(0, 0)

    def __init__(self, param1, param2):  # Not the exact signature, but I think this works as an example
        # We imagine some initialization work here
        self.x = param1
        self.y = param2

    # SORT FUNCTIONS
    def __cmp__(self, other):
        # Implementation already made here

    def __eq__(self, other):
        # Implementation already made here

    def __ne__(self, other):
        # Implementation already made here

    def __ge__(self, other):
        # Implementation already made here

    # And so on...
A second try, by using some functions for each constant:
class MyClass(object):

    def __init__(self, param1, param2):  # Not the exact signature, but I think this works as an example
        # We imagine some initialization work here
        self.x = param1
        self.y = param2

    MY_MAX_CONSTANT = None
    MY_MIN_CONSTANT = None

    @staticmethod
    def get_max():
        if not MyClass.MY_MAX_CONSTANT:
            MyClass.MY_MAX_CONSTANT = MyClass(10, 10)
        return MyClass.MY_MAX_CONSTANT

    @staticmethod
    def get_min():
        if not MyClass.MY_MIN_CONSTANT:
            MyClass.MY_MIN_CONSTANT = MyClass(0, 0)
        return MyClass.MY_MIN_CONSTANT

    # SORT FUNCTIONS (I'm not writing them twice for spacing)
But I wanted to avoid strange function mechanisms only for making two constants.
I prefer the constants being in the class rather than a module because it feels more natural to me, but I'm open to any advice or suggestion. Can anyone point me to a more Pythonic solution?
Thanks

Add your constants after creating your class; you are allowed to add class attributes at any time:
class MyClass:
    # ...

MyClass.MY_MAX_CONSTANT = MyClass(10, 10)
MyClass.MY_MIN_CONSTANT = MyClass(0, 0)
Only when the class statement has completed running is the class object available, bound to the name MyClass. You can't create instances before this point.
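For illustration (assuming the comparison methods on (x, y) are in place), the attached constants then behave like any other instance:
values = [MyClass(3, 4), MyClass.MY_MAX_CONSTANT, MyClass.MY_MIN_CONSTANT]
values.sort()
# values[0] is MY_MIN_CONSTANT, values[-1] is MY_MAX_CONSTANT
print(MyClass.MY_MAX_CONSTANT.x, MyClass.MY_MAX_CONSTANT.y)  # 10 10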

Related

Python post-class-construction hook

I have a need to run some code at class creation time, invoking a function (in this case it happens to be a method) to which I need to pass the cls class object and a few other things (mostly defined in the parent).
My solution so far is this:
@PostConstruct()
class Child(Parent):
    X = 1
    Y = Parent.A
    Z = 2

    @classmethod
    def __post_construct__(cls):
        cls.add_thing(cls.X, as_key=True, before=cls.Y)
        cls.add_thing(cls.Z, as_key=False, before=cls.Y)
Supporting code:
class PostConstruct:
    """
    Runs a class's ``__post_construct__`` class method immediately after
    the body code of the class is run. Allows an author to make small
    modifications to the class (e.g. modifying class-level variables) at
    class creation time.
    """
    def __call__(self, cls):
        cls.__post_construct__()
        return cls

class Parent:
    A = 0

    @classmethod
    def add_thing(cls, thing, as_key, before):
        print("Adding thing...")
Is there some built-in post-class-construction hook method I've overlooked, so I wouldn't need to write this decorator myself? I've looked at https://docs.python.org/3/reference/datamodel.html#customizing-class-creation but I haven't seen anything that seems relevant. But this wouldn't be the first time I've implemented some "clever" thing and then learned later that I could have done it simpler.
Or any other suggestion to achieve a similar result?
Thanks, all. Putting together the suggested tweaks from the comments, here's how I'm moving forward:
@post_construct
class Child(Parent):
    X = 1
    Y = Parent.A
    Z = 2

    @classmethod
    def _post_construct(cls):
        cls.add_thing(cls.X, as_key=True, before=cls.Y)
        cls.add_thing(cls.Z, as_key=False, before=cls.Y)
Supporting code:
def post_construct(cls):
    """
    Runs a class's ``_post_construct`` class method immediately after the
    body code of the class is run. Allows an author to make small
    modifications to the class (e.g. modifying class-level variables) at
    class creation time.
    """
    cls._post_construct()
    return cls

class Parent:
    A = 0

    @classmethod
    def add_thing(cls, thing, as_key, before):
        print("Adding thing...")

Is it possible to define a static class member with type of the class it is in?

How can I define a static class member whose type is the class it is in? I intuitively tried it like this:
class A:
    def __init__(self, a, b, c, d):
        ...

    default_element = A(1, 2, 3, 4)
Which gives the error
name 'A' is not defined
It would make the code for setting/resetting short and organized.
There are workarounds such as
class A:
    def __init__(self, a=1, b=2, c=3, d=4):
        ...
or
class A:
    def __init__(self, a, b, c, d):
        ...

    @staticmethod
    def getDefault():
        return A(1, 2, 3, 4)
but I would prefer the default element if possible, so we actually have an object representing the default instead of a method. Also, with those workarounds you can only have one set of default values, while with the preferred option I could have multiple different template objects.
I'm on Python 3.6.9.
As far as I understand, this is not easily possible: (see "One Gotcha to be Aware of" in https://stackoverflow.com/a/27568860/1256837)
But there is a slight workaround, and I will focus on your requirement to provide different templates:
class _A:
    def __init__(self, a=None):
        self.a = a

    def blubb(self):
        return self.a**2

class A(_A):
    template_default = _A(0)
    template_1 = _A(5)

assert A.template_default.blubb() == 0
assert A.template_1.blubb() == 25
assert A(50).blubb() == 2500
However, I think this code smells: I don't think that this is a good practice.
But if you were to put this class in a Python module ... then in that module you can create module-wide "constants" that correspond to different instantiations of your class. That, I suppose, would be a better approach.
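A minimal sketch of that module-based approach (the module and constant names here are just illustrative):
# a_module.py
class A:
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

# By this point the class statement has finished, so the name A exists
# and module-wide "constants" / templates can be created freely:
DEFAULT_A = A(1, 2, 3, 4)
TEMPLATE_SMALL = A(0, 0, 0, 0)

# elsewhere: from a_module import A, DEFAULT_A, TEMPLATE_SMALL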

Python mixins and types, dependencies

I work on a project with a huge class. Initially its methods were implemented as functions that just get imported, like in this answer:
def plot(self, x, y):
    print(self.field)

def clear(self):
    # body

@classmethod
def from_file(cls, path):
    # body

class Fitter(object):
    def __init__(self, whatever):
        self.field = whatever

    # Imported methods
    from .some_file import plot, clear, from_file
...
But I think it's not the best solution: the IDE complains about the code because it cannot find field in the external methods, and it considers classmethod on some of those functions an error. I hope mixins can help with it.
But I see a similar problem with this approach: mixin classes don't have a base class with all the common methods and fields specified (see the example in Django), so IDEs and linters cannot find the definitions and understand the code correctly either... I tried to use a common base class, like the following:
class FitterBase(object):
    def __init__(self, whatever):
        self.field = whatever

class FitterMixin(FitterBase):
    def plot(self, x, y):
        print(self.field)

    def clear(self):
        # body

    @classmethod
    def from_file(cls, path):
        # body

class Fitter(FitterBase, FitterMixin):
    pass
But the interpreter raises an error:
TypeError: Cannot create a consistent method resolution order (MRO)
Is there any solution to this problem? It's really important because the class contains dozens of big methods, and correct inheritance would help a lot.
Python is trying to construct an MRO for Fitter in which FitterBase both precedes and follows FitterMixin; the former because FitterBase is listed first in the sequence of base classes, the latter because it is the parent of FitterMixin.
To resolve that issue, simply swap the order of the two base classes:
class Fitter(FitterMixin, FitterBase):
    pass
There's no reason to list both, though, because FitterMixin already inherits from FitterBase:
class Fitter(FitterMixin):
    pass
As this makes more obvious, FitterMixin isn't really a mix-in class, because you aren't mixing it with anything. Or, don't make FitterMixin subclass FitterBase:
class FitterBase:
    def __init__(self, whatever):
        self.field = whatever

class FitterMixin:
    def plot(self, x, y):
        print(self.field)

    def clear(self):
        pass

    @classmethod
    def from_file(cls, path):
        pass

class Fitter(FitterBase, FitterMixin):
    pass
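Either way, the MRO is consistent and the mixed-in methods see the fields set up by FitterBase.__init__; a quick sketch of the resulting behaviour (assuming the stub bodies above):
f = Fitter("some data")
f.plot(1, 2)              # prints "some data"
Fitter.from_file("path")  # the classmethod resolves normally (stub returns None)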

Why is it considered bad practice to hardcode the name of a class inside that class's methods?

In python, why is it a bad thing to do something like this:
class Circle:
    pi = 3.14159  # class variable

    def __init__(self, r=1):
        self.radius = r

    def area(self):
        return Circle.pi * squared(self.radius)

def squared(base):
    return pow(base, 2)
The area method could be defined as follows:
def area(self):
    return self.__class__.pi * squared(self.radius)
which is, unless I'm very much mistaken, considered a better way to reference a class variable. The question is why? Intuitively, I don't like it but I don't seem to completely understand this.
Because if you subclass the class, the hardcoded name will no longer refer to that subclass, but to its parent. In your case it really doesn't make a difference, but in many cases it does:
class Rectangle(object):
    name = "Rectangle"

    def print_name(self):
        print(self.__class__.name)  # or print(type(self).name)

class Square(Rectangle):
    name = "Square"
If you instantiate Square and then call its print_name method, it'll print "Square". If you'd use Rectangle.name instead of self.__class__.name (or type(self).name), it'd print "Rectangle".
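A quick demonstration of that difference:
Rectangle().print_name()  # Rectangle
Square().print_name()     # Square (would be "Rectangle" with a hardcoded Rectangle.name)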
Why is it considered bad practice to hardcode the name of a class inside that class's methods?
It's not. I don't know why you think it is.
There are plenty of good reasons to hardcode the name of a class inside its methods. For example, using super on Python 2:
super(ClassName, self).whatever()
People often try to replace this with super(self.__class__, self).whatever(), and they are dead wrong to do so. The first argument must be the actual class the super call occurs in, not self.__class__, or the lookup will find the wrong method.
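A small sketch (not from the original answer) of how that goes wrong: as soon as a further subclass exists, self.__class__ is the subclass, the lookup starts one step too high in the MRO, and the method calls itself forever.
class Base(object):
    def whatever(self):
        print("Base")

class Middle(Base):
    def whatever(self):
        # Wrong: for a Derived instance, self.__class__ is Derived, so the
        # lookup starts right after Derived in the MRO and finds
        # Middle.whatever again -- infinite recursion.
        super(self.__class__, self).whatever()

class Derived(Middle):
    pass

Derived().whatever()   # maximum recursion depth exceeded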
Another reason to hardcode the class name is to avoid overrides. For example, say you've implemented one method using another, as follows:
class Foo(object):
    def big_complicated_calculation(self):
        return # some horrible mess

    def slightly_different_calculation(self):
        return self.big_complicated_calculation() + 2
If you want slightly_different_calculation to be independent of overrides of big_complicated_calculation, you can explicitly refer to Foo.big_complicated_calculation:
def slightly_different_calculation(self):
    return Foo.big_complicated_calculation(self) + 2
Even when you do want to pick up overrides, it's usually better to change ClassName.whatever to self.whatever instead of self.__class__.whatever.
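A compact, self-contained sketch of that pinning behaviour (the concrete return values here are made up for illustration):
class Foo(object):
    def big_complicated_calculation(self):
        return 40  # stand-in for the horrible mess

    def slightly_different_calculation(self):
        # deliberately pinned to Foo's implementation
        return Foo.big_complicated_calculation(self) + 2

class Bar(Foo):
    def big_complicated_calculation(self):
        return 1000

print(Bar().big_complicated_calculation())     # 1000 (the override)
print(Bar().slightly_different_calculation())  # 42, ignores the override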
I can name two reasons here.
1. Inheritance:
class WeirdCircle(Circle):
    pi = 4

c = WeirdCircle()
print(c.area())
# returning 4 with self.__class__.pi
# and 3.14159 with Circle.pi
2. When you want to rename the class, there is only one spot to modify.
The Zen of Python says to keep your code as simple as possible to make it readable. Why get into using the class name or super? If you just use self, it will refer to the relevant class and print its relevant variable. See the code below.
class Rectangle(object):
    name = "Rectangle"

    def print_name(self):
        print(self.name)

class Square(Rectangle):
    name = 'square'

sq = Square()
sq.print_name()

Is there a benefit to defining a class inside another class in Python?

What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass

    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off that will never be used outside the definition of the outer class. For example, to use a metaclass, it's sometimes handy to do
class Foo(object):
    class __metaclass__(type):
        ....
instead of defining a metaclass separately, if you're only using it once.
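As a rough sketch of what such a one-off metaclass might do (Python 2 spelling, as in the snippet above; on Python 3 the nested __metaclass__ is ignored and you would pass the metaclass explicitly with class Foo(metaclass=...) instead):
class Foo(object):
    class __metaclass__(type):
        def __new__(mcs, name, bases, attrs):
            # one-off tweak applied only to Foo at class-creation time
            attrs['created_by_metaclass'] = True
            return type.__new__(mcs, name, bases, attrs)

print(Foo.created_by_metaclass)  # True (on Python 2)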
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...

    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another use for nested classes: when one wants to construct inherited classes whose enhanced functionality is encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionalities of a specific sub-feature of foo

    def __init__(self):
        self.a = self.bar()
        ...

    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionalities for this specific feature

    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar was defined outside of class foo instead, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need its first line replaced by self.a = bar2(), which implies re-writing the whole constructor.
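For illustration, filling the elided bodies with stubs, the same constructor line produces the matching flavour of bar automatically:
class foo:
    class bar:
        pass  # stub for the sub-feature

    def __init__(self):
        self.a = self.bar()

class foo2(foo):
    class bar(foo.bar):
        pass  # stub for the enhanced sub-feature

print(type(foo().a))   # foo.bar
print(type(foo2().a))  # foo2.bar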
You could be using a class as a class generator. Like (in some off-the-cuff code):
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass

    def __init__(self, ...):
        ...

    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
The class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary and the bases are all passed to the function that is the metaclass, and the result is then assigned to the class's name in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder to read code, harder to understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different to any other python scope.
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass

    def download(self):
        ...
Now whoever reads the code knows all the possible exceptions related to this class.
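A small usage sketch, assuming download raises DownloadManager.DownloadException on failure; callers catch it through the owning class, which also reads well at the call site:
manager = DownloadManager()
try:
    manager.download()
except DownloadManager.DownloadException:
    print("download failed")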
Either way, whether defined inside or outside of a class, it would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = Employee.EmpInit(self.deg, self.level).pay

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1*self.j)
        def t2(self): return self.level(2*self.j)
        def t3(self): return self.level(3*self.j)

if __name__ == '__main__':
    for loop in range(10):
        lst = [item for item in input(f"Enter name, degree and years : ").split(' ')]
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit() as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interface with it directly, it makes sense to define it inside as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here. Hence it can also be conveniently defined as a static method in Employee so that we don't need to pass it into EmpInit, instead just invoke it with Employee.level().
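A rough sketch of that last suggestion (level as a static method, so nothing has to be passed into EmpInit; condensed and only illustrative, not the full program above):
class Employee:
    @staticmethod
    def level(j):
        return j * 5E3

    class EmpInit:
        def __init__(self, deg):
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return Employee.level(1 * self.j)
        def t2(self): return Employee.level(2 * self.j)
        def t3(self): return Employee.level(3 * self.j)

    def __init__(self, name, deg, yrs):
        self.name, self.deg, self.yrs = name, deg, yrs
        self.empInit = Employee.EmpInit(self.deg)
        self.base = self.empInit.pay

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        return 0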
