I need to run some code at class creation time: a call to a function (in this case it happens to be a method) that must be passed the cls class object and a few other things (mostly defined in the parent).
My solution so far is this:
@PostConstruct()
class Child(Parent):
    X = 1
    Y = Parent.A
    Z = 2

    @classmethod
    def __post_construct__(cls):
        cls.add_thing(cls.X, as_key=True, before=cls.Y)
        cls.add_thing(cls.Z, as_key=False, before=cls.Y)
Supporting code:
class PostConstruct:
    """
    Runs a class's ``__post_construct__`` class method immediately after
    the body code of the class is run. Allows an author to make small
    modifications to the class (e.g. modifying class-level variables) at
    class creation time.
    """
    def __call__(self, cls):
        cls.__post_construct__()
        return cls

class Parent:
    A = 0

    @classmethod
    def add_thing(cls, thing, as_key, before):
        print("Adding thing...")
Is there some built-in post-class-construction hook method I've overlooked, so I wouldn't need to write this decorator myself? I've looked at https://docs.python.org/3/reference/datamodel.html#customizing-class-creation but I haven't seen anything that seems relevant. But this wouldn't be the first time I've implemented some "clever" thing and then learned later that I could have done it simpler.
Or any other suggestion to achieve a similar result?
Thanks, all. Putting together the suggested tweaks from the comments, here's how I'm moving forward:
@post_construct
class Child(Parent):
    X = 1
    Y = Parent.A
    Z = 2

    @classmethod
    def _post_construct(cls):
        cls.add_thing(cls.X, as_key=True, before=cls.Y)
        cls.add_thing(cls.Z, as_key=False, before=cls.Y)
Supporting code:
def post_construct(cls):
    """
    Runs a class's ``_post_construct`` class method immediately after
    the body code of the class is run. Allows an author to make small
    modifications to the class (e.g. modifying class-level variables) at
    class creation time.
    """
    cls._post_construct()
    return cls

class Parent:
    A = 0

    @classmethod
    def add_thing(cls, thing, as_key, before):
        print("Adding thing...")
Related
Like the question posted here, I want to create a class that inherits from another class passed as an argument.
class A():
    def __init__(self, args):
        stuff

class B():
    def __init__(self, args):
        stuff

class C():
    def __init__(self, cls, args):
        self.inherit(cls, args)

args = ...  # arguments to create instances of A and B

class_from_A = C(A, args)  # instance of C inherited from A
class_from_B = C(B, args)  # instance of C inherited from B
I want to do this so that I can keep track of calls I make to different web APIs. The thought is that I am just adding my own functionality to any API-type object. The problem with the solution to the linked question is that I don't want to have to go through the additional "layer" to use the API-type object. I want to say obj.get_data() instead of obj.api.get_data().
I've tried looking into how super() works but haven't come across anything that would help (although I could've easily missed something). Any help would be nice, and I'm open to any other approaches for what I'm trying to do; however, just out of curiosity, I'd like to know if this is possible.
I don't think it's possible, because __init__ is called after __new__, which is where you would specify base classes. But I think you can achieve your goal of tracking API calls using a metaclass. Since you didn't give any examples of what tracking the calls means, I'll leave you with an example metaclass which counts method calls. You can adapt it to your needs.
Another alternative would be to subclass A and B with methods that track whatever you need, and just return super().whatever(). I think I'd prefer that method unless A and B contain too many methods worth managing like that.
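A rough sketch of that subclassing alternative, assuming A exposes a get_data() method as described in the question (the call_count attribute is made up):

class TrackedA(A):
    def get_data(self, *args, **kwargs):
        # record the call however you need, then defer to the real API class
        self.call_count = getattr(self, "call_count", 0) + 1
        return super().get_data(*args, **kwargs)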
Here's a metaclass implementation from python-course.eu, by Bernd Klein. Click the link for more detail.
class FuncCallCounter(type):
    """ A Metaclass which decorates all the methods of the
        subclass using call_counter as the decorator
    """

    @staticmethod
    def call_counter(func):
        """ Decorator for counting the number of function
            or method calls to the function or method func
        """
        def helper(*args, **kwargs):
            helper.calls += 1
            return func(*args, **kwargs)
        helper.calls = 0
        helper.__name__ = func.__name__
        return helper

    def __new__(cls, clsname, superclasses, attributedict):
        """ Every method gets decorated with the decorator call_counter,
            which will do the actual call counting
        """
        for attr in attributedict:
            if callable(attributedict[attr]) and not attr.startswith("__"):
                attributedict[attr] = cls.call_counter(attributedict[attr])
        return type.__new__(cls, clsname, superclasses, attributedict)
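For example, applied to a made-up API wrapper it could look like this (ApiClient and get_data are placeholder names; the calls attribute comes from the call_counter decorator above):

class ApiClient(metaclass=FuncCallCounter):
    def get_data(self):
        return {"ok": True}

client = ApiClient()
client.get_data()
client.get_data()
print(ApiClient.get_data.calls)  # -> 2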
This is a feature I miss in several languages, and I wonder if anyone has any idea how it can be done in Python.
The idea is that I have a base class:
class Base(object):
    def __init__(self):
        self.my_data = 0

    def my_rebind_function(self):
        pass
and a derived class:
class Child(Base):
    def __init__(self):
        super().__init__()
        # Do some stuff here
        self.my_rebind_function()  # <==== This is the line I want to get rid of

    def my_rebind_function(self):
        # Do stuff with self.my_data
        pass
As can be seen above, I have a rebind function which I want called after Child.__init__ has done its job. I want this for all inherited classes, and ideally it would be handled by the base class so I do not have to retype that line in every child class.
It would be nice if the language had a hook like __finally__, working similarly to how finally works with exceptions: it would run after all __init__ functions (of all derived classes) have completed. The call order would then be something like:
Base1.__init__()
...
BaseN.__init__()
LeafChild.__init__()
LeafChild.__finally__()
BaseN.__finally__()
...
Base1.__finally__()
And then object construction is finished. This is also kind of similar to unit testing with setup, run and teardown functions.
You can do this with a metaclass like this:
class Meta(type):
    def __call__(cls, *args, **kwargs):
        print("start Meta.__call__")
        instance = super().__call__(*args, **kwargs)
        instance.my_rebind_function()
        print("end Meta.__call__\n")
        return instance

class Base(metaclass=Meta):
    def __init__(self):
        print("Base.__init__()")
        self.my_data = 0

    def my_rebind_function(self):
        pass

class Child(Base):
    def __init__(self):
        super().__init__()
        print("Child.__init__()")

    def my_rebind_function(self):
        print("Child.my_rebind_function")
        # Do stuff with self.my_data
        self.my_data = 999

if __name__ == '__main__':
    c = Child()
    print(c.my_data)
By overriding Meta.__call__ you can hook in after all __init__ (and __new__) methods of the class tree have run and before the instance is returned. This is the place to call your rebind function. To illustrate the call order I added some print statements. The output will look like this:
start Meta.__call__
Base.__init__()
Child.__init__()
Child.my_rebind_function
end Meta.__call__
999
If you want to read on and get deeper into the details, I can recommend the following great article: https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/
I may still not fully understand, but this seems to do what (I think) you want:
class Base(object):
    def __init__(self):
        print("Base.__init__() called")
        self.my_data = 0
        self.other_stuff()
        self.my_rebind_function()

    def other_stuff(self):
        """ empty """

    def my_rebind_function(self):
        """ empty """

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()

    def other_stuff(self):
        print("In Child.other_stuff() doing other stuff I want done in Child class")

    def my_rebind_function(self):
        print("In Child.my_rebind_function() doing stuff with self.my_data")

child = Child()
Output:
Base.__init__() called
In Child.other_stuff() doing other stuff I want done in Child class
In Child.my_rebind_function() doing stuff with self.my_data
If you want a "rebind" function to be invoked after each instance of a type which inherits from Base is instantiated, then I would say this "rebind" function can live outside the Base class(or any class inheriting from it).
You can have a factory function that gives you the object you need when you invoke it(for example give_me_a_processed_child_object()). This factory function basically instantiates an object and does something to it before it returns it to you.
Putting logic in __init__ is not a good idea because it obscures logic and intention. When you write kid = Child(), you don't expect many things to happen in the background, especially things that act on the instance of Child that you just created. What you expect is a fresh instance of Child.
A factory function, however, transparently does something to an object and returns it to you. This way you know you're getting an already processed instance.
Finally, you wanted to avoid adding "rebind" methods to your Child classes, which you now can, since all that logic can be placed in your factory function.
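A minimal sketch of that factory idea, reusing the Child and my_rebind_function names from the question:

def give_me_a_processed_child_object(*args, **kwargs):
    kid = Child(*args, **kwargs)   # plain construction, nothing hidden
    kid.my_rebind_function()       # the "processing" happens here, visibly
    return kid

kid = give_me_a_processed_child_object()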
I hope you are doing great. This question is really about getting rid of the reference to the base class.
Basically I want to collect all methods of a child class at the class level instead of the instance level, using a parent classmethod. However, I was told the base class name has to be really long.
The first piece works but is really annoying because of the long name. Even in the clean version I have to write A.eat every time.
I promise people won't define another method "eat" in any child like B. Can I actually get rid of the base class reference so that I can use @eat?
class IDontWantToDoThisButNameHasToBeThisLong(object):
    a = []

    @classmethod
    def eat(cls, func):
        cls.a.append(func)

class B(IDontWantToDoThisButNameHasToBeThisLong):
    @IDontWantToDoThisButNameHasToBeThisLong.eat
    def apple(self, x):
        print x

IDontWantToDoThisButNameHasToBeThisLong.eat(lambda x: x + 1)

x = B()
IDontWantToDoThisButNameHasToBeThisLong.a[0](x, 1)
print IDontWantToDoThisButNameHasToBeThisLong.a[1](1)
Clean version:
class A(object):
    a = []

    @classmethod
    def eat(cls, func):
        cls.a.append(func)

class B(A):
    @A.eat
    def apple(self, x):
        print x

A.eat(lambda x: x + 1)

x = B()
A.a[0](x, 1)
print A.a[1](1)
Sincerely,
The class IDontWantToDoThisButNameHasToBeThisLong is really just an object. In Python, most things are objects, so we can assign just about anything to a variable, including a class.
What you could do here is something like the following
class IDontWantToDoThisButNameHasToBeThisLong(object):
    a = []

    @classmethod
    def eat(cls, func):
        cls.a.append(func)

A = IDontWantToDoThisButNameHasToBeThisLong

class B(A):
    @A.eat
    def apple(self, x):
        print x

A.eat(lambda x: x + 1)

x = B()
IDontWantToDoThisButNameHasToBeThisLong.a[0](x, 1)
A.a[0](x, 1)
print IDontWantToDoThisButNameHasToBeThisLong.a[1](1)
There's no perfect solution for what you want to do, but there are a few different approaches that might be good enough.
To start with the simplest option, you could give your long class a shorter name before using the class method in the child classes:
class IDontWantToDoThisButNameHasToBeThisLong(object):
    ...

A = IDontWantToDoThisButNameHasToBeThisLong

# later code can use A.whatever()
Another option would be to move the decorator out of the class with the long name, so that your later code would refer to it directly as a global, rather than a class method. This would require it to be slightly redesigned (which might break things if you ever intend for there to be multiple different a lists that are accessed through the same decorator called via different classes):
class IDontWantToDoThisButNameHasToBeThisLong(object):
    a = []

def eat(func):
    IDontWantToDoThisButNameHasToBeThisLong.a.append(func)  # only need to use the name once
    return func  # I suspect you want this too (a decorator should return a callable)

# later code can use @eat as a decorator, without referring to the long class name
A hybrid of those two approaches might be to leave the existing class method definition intact, but to create another global name for the bound method that's easier to access:
eat = IDontWantToDoThisButNameHasToBeThisLong.eat
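With that alias in place, the decorator usage from the question no longer needs the long name at all (same Python 2 style as the question, sketch only):

eat = IDontWantToDoThisButNameHasToBeThisLong.eat

class B(IDontWantToDoThisButNameHasToBeThisLong):
    @eat
    def apple(self, x):
        print x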
A final possible approach would be to use fancier programming with metaclasses, or (if you're using Python 3.6+) __init_subclass__ or similar, to achieve the goal you have in mind without needing to use a class method as a decorator. I'm not going to include code for that, since the best way to do this probably depends on more details of your design than what you've shown in your example.
I want to be able to create an instance of a parent class X, with a string "Q" as an extra argument.
This string is the name of, and an identifier for, a subclass Q of the parent class X.
I want the instance of the parent class to become (or be replaced with) an instance of the subclass.
I am aware that this is probably a classic problem (error?). After some searching I haven't found a suitable solution though.
I came up with the following solution myself:
I added a dictionary, mapping the possible identifiers to instances of their subclasses, to the init method of the parent class.
Then I assigned the __class__ attribute of the corresponding subclass instance to the current instance's __class__ attribute.
I required the argument of the init method not to be the default value, to prevent infinite looping.
Following is an example of what the code looks like in practice:
class SpecialRule:
    """"""
    name = "Special Rule"
    description = "This is a Special Rule."

    def __init__(self, name=None):
        """"""
        print "SpecialInit"
        if name != None:
            SPECIAL_RULES = {
                "Fly": FlyRule(),
                "Skirmish": SkirmishRule()
            }  # dictionary coupling names to SpecialRule classes
            self.__class__ = SPECIAL_RULES[name].__class__

    def __str__(self):
        """"""
        return self.name

class FlyRule(SpecialRule):
    """"""
    name = "Fly"
    description = "Flies."

    def __init__(self):
        """"""
        print "FlyInit" + self.name
        SpecialRule.__init__(self)

    def addtocontainer(self, container):
        """this instance messes with the attributes of its containing class when added to some sort of list"""

class SkirmishRule(SpecialRule):
    """"""
    name = "Skirmish"
    description = "Skirmishes."

    def __init__(self):
        """"""
        SpecialRule.__init__(self)

    def addtocontainer(self, container):
        """this instance messes with the attributes of its containing class when added to some sort of list"""

test = SpecialRule("Fly")
print "evaluating resulting class"
print test.description
print test.__class__
output:

SpecialInit
FlyInitFly
SpecialInit
evaluating resulting class
Flies.
__main__.FlyRule
Is there a more pythonic solution, and are there foreseeable problems with mine?
(And am I mistaken that it's good programming practice to explicitly call the parent class's __init__(self) in the __init__ of the subclass?)
My solution feels a bit ... wrong ...
Quick recap so far:
Thanks for the quick answers.
# Mark Tolonen's solution
I've been looking into the __new__ method, but when I try to make A, B and C in Mark Tolonen's example subclasses of Z, I get the error that class Z isn't defined yet. Also I'm not sure whether instantiating class A the normal way (with variable = A() outside of Z's scope) is possible, unless you already have an instance of a subclass of Z and call the class as an attribute of that instance, which doesn't seem very straightforward. __new__ is quite interesting, so I'll fool around with it a bit more; your example is easier to grasp than what I got from the Python docs.
# Greg Hewgill's solution
I tried the staticmethod solution and it seems to work fine. I looked into using a separate function as a factory before, but I guessed it would get hard to manage a large program with a list of loose strands of constructor code in the main block, so I'm very happy to integrate it in the class.
I did experiment a bit seeing if I could turn the create-method into a decorated .__call__() but it got quite messy so I'll leave it at that.
I would solve this by using a function that encapsulates the choice of object:
class SpecialRule:
    """"""
    name = "Special Rule"
    description = "This is a Special Rule."

    @staticmethod
    def create(name=None):
        """"""
        print "SpecialCreate"
        if name != None:
            SPECIAL_RULES = {
                "Fly": FlyRule,
                "Skirmish": SkirmishRule
            }  # dictionary coupling names to SpecialRule classes
            return SPECIAL_RULES[name]()
        else:
            return SpecialRule()
I have used the @staticmethod decorator to allow you to call the create() method without already having an instance of the object. You would call this like:
SpecialRule.create("Fly")
Look up the __new__ method. It is the correct way to override how an instance is created, as opposed to how it is initialized.
Here's a quick hack:
class Z(object):
    class A(object):
        def name(self):
            return "I'm A!"

    class B(object):
        def name(self):
            return "I'm B!"

    class C(object):
        def name(self):
            return "I'm C!"

    D = {'A': A, 'B': B, 'C': C}

    def __new__(cls, t):
        return cls.D[t]()
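For illustration, calling it looks roughly like this; note that the object you get back is an instance of the nested class, not of Z (so Z.__init__ would not even be run):

obj = Z('A')
print(obj.name())          # -> I'm A!
print(isinstance(obj, Z))  # -> False: __new__ returned an instance of the nested class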
What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass

    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off, which will never be used outside the definition of the outer class. For example to use a metaclass, it's sometimes handy to do
class Foo(object):
    class __metaclass__(type):
        ....
instead of defining a metaclass separately, if you're only using it once.
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...

    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another use for nested classes: when one wants to construct inherited classes whose enhanced functionality is encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionalities of a specific sub-feature of foo

    def __init__(self):
        self.a = self.bar()
        ...

    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionalities for this specific feature

    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar was defined outside of class foo instead, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need to have its first line replaced by self.a = bar2(), which implies re-writing the whole constructor.
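Assuming the elided bodies above are filled in, the difference can be checked like this (sketch):

print(type(foo().a))     # <class '__main__.foo.bar'>
print(type(foo2().a))    # <class '__main__.foo2.bar'>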
You could be using a class as a class generator. Like (in some off-the-cuff code):
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass

    def __init__(self, ...):
        ...

    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
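One way mk_cls might be fleshed out, purely as an illustration (the names and the made_by attribute are invented; the point is that type() can build a new class from whichever base was chosen):

class gen(object):
    class base_1(object): pass
    class base_2(object): pass

    def mk_cls(self, name, base):
        # build a brand-new class from the chosen base and the generator's state
        return type(name, (base,), {'made_by': self})

g = gen()
Cls = g.mk_cls('MyThing', gen.base_1)
print(Cls.__bases__)   # (<class '__main__.gen.base_1'>,)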
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
the class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary and the bases are all passed to the function that is the metaclass, and the result is then assigned to the variable 'name' in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder-to-read code, harder-to-understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different from any other Python scope.
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass

    def download(self):
        ...
Now anyone reading the code knows all the possible exceptions related to this class.
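For example, callers would catch it through the owning class, which keeps the association obvious (sketch, assuming download() raises DownloadException on failure):

mgr = DownloadManager()
try:
    mgr.download()
except DownloadManager.DownloadException:
    print("download failed")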
Either way, defined inside or outside of a class, would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = Employee.EmpInit(self.deg, self.level).pay

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1*self.j)
        def t2(self): return self.level(2*self.j)
        def t3(self): return self.level(3*self.j)
if __name__ == '__main__':
    for loop in range(10):
        lst = [item for item in input("Enter name, degree and years : ").split(' ')]
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit() as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interface with it directly, it makes sense to define it inside, as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here, so it could also be conveniently defined as a static method in Employee; then we don't need to pass it into EmpInit and can instead just invoke it with Employee.level(), as sketched below.
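A condensed sketch of that staticmethod variant (the rest of Employee is omitted; only the pieces that change are shown):

class Employee:
    @staticmethod
    def level(j):
        return j * 5E3

    class EmpInit:
        def __init__(self, deg):
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return Employee.level(1 * self.j)   # no level argument passed in
        def t2(self): return Employee.level(2 * self.j)
        def t3(self): return Employee.level(3 * self.j)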