Is it good design to create an object of the child class inside the parent class, as in the example below? It seems to work, but is it good design? Is there a better way to do it?
class parent(object):
    def __init__(self):
        print('Im running')

    def execute(self):
        x = child()
        x.run()
        x.myfun()

    def myfun(self):
        print('parent function')

    def run(self):
        print('parent running')

class child(parent):
    def __init__(self):
        super().__init__()
        print('Im running too')

    def run(self):
        print('child running')

f = parent()
f.execute()
This is definitely not a good design for your problem, nor a good design in general (barring exceptions I cannot think of), and it goes against OOP design and the SOLID principles.
Simply put, in OOP design, or any other software engineering frame of mind, you want clear relations. This arrangement makes the relationship between your parent class and your child class inherently more complex. Not to mention that most other languages (at least those that run compiled code) would not allow such a thing.
If you need an instance of one inside the other and vice versa, maybe inheritance was the wrong pattern to begin with, since your classes seem to be connected in a two-way manner, unlike the scenarios in which inheritance is usually employed.
The fact that execute doesn't use self at all suggests it should be a class method, in which case you can use whichever class is actually provided to instantiate x.
Once you've done this, the definition of Parent no longer relies on any particular subclass; in fact, it doesn't rely on Parent being subclassed at all, and Parent.execute() will continue to work.
For example,
class Parent:
    def __init__(self):
        print('Im running')

    @classmethod
    def execute(cls):
        x = cls()
        x.run()
        x.myfun()

    def myfun(self):
        print('parent function')

    def run(self):
        print('parent running')

class Child(Parent):
    def __init__(self):
        super().__init__()
        print('Im running too')

    def run(self):
        print('child running')

Child.execute()
This will output
Im running
Im running too
child running
parent function
Since Child.execute isn't defined, it resolves to Parent.execute. But Child is still the first argument passed. As a result, x will be an instance of Child, not Parent. x.run() thus runs Child.run, but x.myfun() runs Parent.myfun.
The fact that Parent.execute still depends on x having attributes specific to cls, though, suggests that you should restrict execute to using only things defined by Parent, and let a child override execute to add any child-specific behavior.
Or, execute should be an instance method that simply calls self.run() and self.myfun(), putting the burden on the caller to invoke execute on an appropriate object.
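A minimal sketch of that variant (same method bodies as above; only execute changes):

class Parent:
    def execute(self):   # plain instance method now
        self.run()       # resolved against whatever instance the caller used
        self.myfun()

    def myfun(self):
        print('parent function')

    def run(self):
        print('parent running')

class Child(Parent):
    def run(self):
        print('child running')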
c = Child()
c.execute()
Related
This is a feature I miss in several languages and wonder if anyone has any idea how it can be done in Python.
The idea is that I have a base class:
class Base(object):
    def __init__(self):
        self.my_data = 0

    def my_rebind_function(self):
        pass
and a derived class:
class Child(Base):
    def __init__(self):
        super().__init__()
        # Do some stuff here
        self.my_rebind_function()  # <==== This is the line I want to get rid of

    def my_rebind_function(self):
        # Do stuff with self.my_data
        pass
As can be seen above, I have a rebind function that I want called after Child.__init__ has done its job. And I want this done for all inherited classes, so it would be great if it were performed by the base class, so that I do not have to retype that line in every child class.
It would be nice if the language had something like a __finally__ method, operating similarly to the way it works with exceptions: it would run after all __init__ functions (of all derived classes) have run. So the call order would be something like:
Base1.__init__()
...
BaseN.__init__()
LeafChild.__init__()
LeafChild.__finally__()
BaseN.__finally__()
...
Base1.__finally__()
And then object construction is finished. This is also kind of similar to unit testing with setup, run and teardown functions.
You can do this with a metaclass, like this:
class Meta(type):
    def __call__(cls, *args, **kwargs):
        print("start Meta.__call__")
        instance = super().__call__(*args, **kwargs)
        instance.my_rebind_function()
        print("end Meta.__call__\n")
        return instance

class Base(metaclass=Meta):
    def __init__(self):
        print("Base.__init__()")
        self.my_data = 0

    def my_rebind_function(self):
        pass

class Child(Base):
    def __init__(self):
        super().__init__()
        print("Child.__init__()")

    def my_rebind_function(self):
        print("Child.my_rebind_function")
        # Do stuff with self.my_data
        self.my_data = 999

if __name__ == '__main__':
    c = Child()
    print(c.my_data)
By overriding Meta.__call__ you can hook in after all the __init__ (and __new__) methods of the class tree have run and before the instance is returned. This is the place to call your rebind function. To make the call order clear I added some print statements. The output will look like this:
start Meta.__call__
Base.__init__()
Child.__init__()
Child.my_rebind_function
end Meta.__call__
999
If you want to read on and get deeper into the details, I can recommend the following great article: https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/
I may still not fully understand, but this seems to do what (I think) you want:
class Base(object):
    def __init__(self):
        print("Base.__init__() called")
        self.my_data = 0
        self.other_stuff()
        self.my_rebind_function()

    def other_stuff(self):
        """ empty """

    def my_rebind_function(self):
        """ empty """

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()

    def other_stuff(self):
        print("In Child.other_stuff() doing other stuff I want done in Child class")

    def my_rebind_function(self):
        print("In Child.my_rebind_function() doing stuff with self.my_data")

child = Child()
Output:
Base.__init__() called
In Child.other_stuff() doing other stuff I want done in Child class
In Child.my_rebind_function() doing stuff with self.my_data
If you want a "rebind" function to be invoked after each instantiation of a type that inherits from Base, then I would say this "rebind" function can live outside the Base class (or any class inheriting from it).
You can have a factory function that gives you the object you need when you invoke it (for example, give_me_a_processed_child_object()). This factory function basically instantiates an object and does something to it before it returns it to you.
Putting logic in __init__ is not a good idea because it obscures logic and intention. When you write kid = Child(), you don't expect many things to happen in the background, especially things that act on the instance of Child that you just created. What you expect is a fresh instance of Child.
A factory function, however, transparently does something to an object and returns it to you. This way you know you're getting an already processed instance.
Finally, you wanted to avoid adding "rebind" methods to your Child classes, which you now can, since all that logic can be placed in your factory function.
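A minimal sketch of that idea (make_child is a hypothetical name; Child and my_rebind_function are from the question):

def make_child():
    # hypothetical factory: build the instance, then run the post-init step
    obj = Child()
    obj.my_rebind_function()  # the "rebind" logic is applied here, not in __init__
    return obj

kid = make_child()  # you get back an already processed instance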
I have two classes that need to pass data between each other. The first class instantiates the second. The second class needs to be able to pass information back to the first, but I cannot instantiate ClassOne again from ClassTwo. Both classes run off a shared timer, where they poll different things; while they share the timer, they cannot share the objects they poll.
My current (working) solution is to pass a method of ClassOne to ClassTwo and use it to send data back up, but I feel this might be a bit hack-ey and the wrong way to go about it.
class ClassOne(object):
    def __init__(self, timer):
        self.classTwo = ClassTwo(self.process_alerts, timer)
        self.classTwo.start()

    def process_alerts(self, alert_msg):
        print(alert_msg)

class ClassTwo(object):
    def __init__(self, process_alerts, timer):
        self.process_alerts = process_alerts  # <----- ClassOne's method

    def start(self):
        # check for alerts:
        #     if alert found:
        #         self.alert(alert_msg)
        pass

    def alert(self, alert_msg):
        self.process_alerts(alert_msg)  # <----- ClassOne's method
Thank you for your time.
Nothing prevents you from passing the current ClassOne instance (self) to its own ClassTwo instance:
class ClassOne(object):
    def __init__(self):
        self.two = ClassTwo(self)
        self.two.start()

    def process_alert(self, msg):
        print(msg)

class ClassTwo(object):
    def __init__(self, parent):
        self.parent = parent

    def start(self):
        while True:
            if self.has_message():
                self.parent.process_alert(self.get_message())
Note that in this context "parent" denotes a containment relationship ("has a"); it has nothing to do with inheritance ("is a").
If what bugs you is that ClassOne is responsible for instantiating ClassTwo (which indeed introduces strong coupling), you can either change ClassOne so it takes a factory:
class ClassOne(object):
    def __init__(self, factory):
        self.other = factory(self)
        self.other.start()
    # etc.
and then pass ClassTwo as the factory:
c1 = ClassOne(ClassTwo)
So you can actually pass anything that returns an object with the right interface, which makes unit testing easier.
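For instance, a hypothetical stub for a test (FakeTwo is an invented name; process_alert is assumed to exist on ClassOne, as in the earlier example):

class FakeTwo(object):
    # minimal object with the same interface as ClassTwo
    def __init__(self, parent):
        self.parent = parent

    def start(self):
        self.parent.process_alert("test message")  # drive ClassOne directly

c1 = ClassOne(FakeTwo)  # ClassOne never knows it got a fake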
Or, at least in your (I assume stripped-down) example, you could just make ClassOne pass itself to ClassTwo.start() and explicitly pass the ClassTwo instance to ClassOne, i.e.:
class ClassOne(object):
    def __init__(self, other):
        self.other = other
        self.other.start(self)

    def process_alert(self, msg):
        print(msg)

class ClassTwo(object):
    def start(self, parent):
        while True:
            if self.has_message():
                parent.process_alert(self.get_message())
c2 = ClassTwo()
c1 = ClassOne(c2)
Or, even simpler, remove the call to ClassTwo.start from ClassOne, and you don't need any reference to a ClassTwo instance in ClassOne.__init__:
class ClassOne(object):
    def process_alert(self, msg):
        print(msg)

class ClassTwo(object):
    def start(self, parent):
        while True:
            if self.has_message():
                parent.process_alert(self.get_message())

c1 = ClassOne()
c2 = ClassTwo()
c2.start(c1)
which is as decoupled as it can be, but only works if ClassTwo needs the ClassOne instance only in start() (and in methods called from start), and if ClassOne doesn't need to keep a reference to the ClassTwo instance either.
You could remove, or at least minimize, the coupling between the classes. I've found this sort of architecture maps really well to sharing data by communicating across a Queue.
By using a Queue you can decouple the two classes. The producer (ClassTwo) checks for messages and publishes them to a queue; it no longer needs to know how to instantiate the other class or interact with it, it just passes a message.
Then a ClassOne instance can pull messages from the queue as they become available. This also lends itself well to scaling each class independently of the other.
ClassTwo -> publish to queue -> ClassOne pulls from queue.
This also helps with testing, as the two classes are completely isolated: you can hand a Queue to either class.
Queues also usually provide blocking operations that wait until a message becomes available, so you don't have to manage timeouts.
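A minimal sketch of that idea using the standard library's queue module (Python 3; the alert-checking loop is stubbed out with a plain range):

import queue

class ClassTwo(object):
    def __init__(self, out_queue):
        self.out_queue = out_queue

    def start(self):
        for n in range(3):                      # stand-in for "check for alerts"
            self.out_queue.put("alert %d" % n)  # publish; no reference to ClassOne

class ClassOne(object):
    def __init__(self, in_queue):
        self.in_queue = in_queue

    def process_alerts(self):
        while True:
            try:
                msg = self.in_queue.get(timeout=1)  # blocks until a message arrives
            except queue.Empty:
                break
            print(msg)

q = queue.Queue()
producer = ClassTwo(q)
consumer = ClassOne(q)
producer.start()
consumer.process_alerts()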
Python 2.7.6 on Linux.
I'm using a test class that inherits from a parent. The parent class holds a number of fields that are common to many child classes, and I need to call the parent setUp method to initialize the fields. Is calling ParentClass.setUp(self) the correct way to do this? Here's a simple example:
class RESTTest(unittest.TestCase):
    def setUp(self):
        self.host = host
        self.port = port
        self.protocol = protocol
        self.context = context

class HistoryTest(RESTTest):
    def setUp(self):
        RESTTest.setUp(self)
        self.endpoint = history_endpoint
        self.url = "%s://%s:%s/%s/%s" % (self.protocol, self.host, self.port, self.context, self.endpoint)

    def testMe(self):
        self.assertTrue(True)

if __name__ == '__main__':
    unittest.main()
Is this correct? It seems to work.
You would use super for that.
super(ChildClass, self).method(args)
class HistoryTest(RESTTest):
    def setUp(self):
        super(HistoryTest, self).setUp()
        ...

In Python 3 you may write:

class HistoryTest(RESTTest):
    def setUp(self):
        super().setUp()
        ...
which is simpler.
See this answer:
super() lets you avoid referring to the base class explicitly, which can be nice. But the main advantage comes with multiple inheritance, where all sorts of fun stuff can happen. See the standard docs on super if you haven't already.
Multiple inheritance
To (try to) answer the question in your comment:
How do you specify which super method you want to call?
From what I understand of the philosophy of multiple inheritance (in Python), you don't. I mean, super, along with the Method Resolution Order (MRO), should do things right and select the appropriate methods. (Yes, methods is a plural; see below.)
There are a lot of blog posts / SO answers about this which you can find with the keywords "multiple inheritance", "diamond", "MRO", "super", etc. This article provides a Python 3 example I found surprising and didn't see in other sources:
class A:
    def m(self):
        print("m of A called")

class B(A):
    def m(self):
        print("m of B called")
        super().m()

class C(A):
    def m(self):
        print("m of C called")
        super().m()

class D(B, C):
    def m(self):
        print("m of D called")
        super().m()

D().m()
m of D called
m of B called
m of C called
m of A called
See? Both B.m() and C.m() are called thanks to super, which seems like the right thing to do considering D inherits from both B and C.
I suggest you play with this example as I just did. By adding a few prints, you'll see that, when calling D().m(), the super().m() statement in class B itself calls C.m(). Whereas, of course, if you call B().m() (a B instance, not a D instance), only A.m() is called. In other words, super().m() in B is aware of the class of the instance it is dealing with and behaves accordingly.
Using super everywhere sounds like a silver bullet, but you need to make sure all the classes in the inheritance hierarchy are cooperative (another keyword to dig into) and don't break the chain, for instance when expecting additional parameters in child classes.
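A minimal sketch of such a cooperative hierarchy, where each __init__ consumes its own keyword arguments and forwards the rest (all class and parameter names here are invented for illustration):

class Base:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # keep the chain going, even in the base

class Left(Base):
    def __init__(self, left_param=None, **kwargs):
        self.left_param = left_param
        super().__init__(**kwargs)  # forward what this class doesn't consume

class Right(Base):
    def __init__(self, right_param=None, **kwargs):
        self.right_param = right_param
        super().__init__(**kwargs)

class Both(Left, Right):
    pass

b = Both(left_param=1, right_param=2)  # each __init__ runs exactly once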
So I have a parent class:
class Parent(object):
    def function(self):
        do_something()
And many child classes:
class Child1(Parent):
    def function(self):
        do_something_else_1()

class Child2(Parent):
    def function(self):
        do_something_else_2()

...
I would like to ensure that the parent function() is always called before the children's function(), so that every call to function() also calls do_something() no matter the class. Now, I know I can do something like:
class Child1(Parent):
    def function(self):
        super(Child1, self).function()
        do_something_else_1()

class Child2(Parent):
    def function(self):
        super(Child2, self).function()
        do_something_else_2()

...
but I would rather not do that for every child class, because these child classes are being generated on the fly, and because these child classes themselves are being extended further. Instead, I would like to do something that looks like
class Child1(Parent):
    @call_parent
    def function(self):
        do_something_else_1()

class Child2(Parent):
    @call_parent
    def function(self):
        do_something_else_2()

...
And write a decorator to accomplish the same task.
I have two questions:
Is this even a good idea? Am I using decorators and function overriding in their intended way?
How would I go about writing this decorator?
Is this even a good idea? Am I using decorators and function overriding in their intended way?
This question is hard to answer without knowing the details of your system.
Just from the abstract example it looks OK, but replacing the explicit and clear super() call with something like @call_parent is not a good idea.
Everyone knows, or can easily find out, what super() does, while the decorator would only cause confusion.
How would I go about writing this decorator?
Don't write the decorator; instead, you can use the template method pattern:
class Parent(object):
    def function(self):
        do_something()
        self.do_something_in_child()

    def do_something_in_child(self):
        pass
Now the child classes only override do_something_in_child; function stays only in Parent, so you can be sure your do_something() is always called.
class Child1(Parent):
    def do_something_in_child(self):
        do_something_else_1()

class Child2(Parent):
    def do_something_in_child(self):
        do_something_else_2()

class Child3(Parent):
    # no override here, function() will do the same as it does in Parent
    pass
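Used like this (still assuming the question's do_something* stand-ins exist), do_something() runs first in every case:

Child1().function()  # do_something(), then do_something_else_1()
Child3().function()  # do_something() only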
I'm not well versed in Python, but you could do something like:

# Function in the children. Overrides the parent one.
def function(self):
    # child code
    super().function()  # however it is used
    # more child code
If that is not feasible, take a look at the template method design pattern.
# Function in parent. Do not override this one.
def function(self):
    # your parent code
    self.function_do_something()
    # more code if you need it

# Function in parent. Children override this one.
def function_do_something(self):
    ...

And you can always let function_do_something() be empty, so that only the parent code runs.
Consider (Python):
Assume the global functions default_start(), main_behaviour(), default_end(), custom_start(), and custom_end() exist; they are just code filler for illustration purposes.
class Parent:
    def on_start_main(self):
        default_start()

    def main_behaviour(self):
        main_behaviour()

    def on_end_main(self):
        default_end()

    def main(self):
        self.on_start_main()
        self.main_behaviour()
        self.on_end_main()

class Child(Parent):
    def on_start_main(self):
        custom_start()

    def on_end_main(self):
        custom_end()
vs
class Parent:
    def main_behaviour(self):
        main_behaviour()

    def main(self):
        default_start()
        self.main_behaviour()
        default_end()

class Child(Parent):
    def main(self):
        custom_start()
        Parent.main_behaviour(self)
        custom_end()
I don't know which of these is preferable, but I suspect the second. Is this a matter of taste or are there concrete reasons why one is better than the other?
Thanks
I would prefer the first solution for one simple reason:
What if you want the Child class to override just on_start_main and leave on_end_main unchanged?
If you choose the first solution, you just override the on_start_main method and you're done. You don't have to know, or even care, what the Parent class does in its on_end_main.
If you choose the second solution, you have to know exactly what the Parent class does in its main method, so not only do you have to dig into the source of the Parent class, you also have to duplicate code that is already written.
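For example, with the first solution, customizing only the start step is a three-line subclass (StartOnlyChild is an invented name; the stand-in globals are as assumed above):

class StartOnlyChild(Parent):
    # override just one hook; main() and on_end_main() are inherited unchanged
    def on_start_main(self):
        custom_start()

StartOnlyChild().main()  # custom_start(), main_behaviour(), default_end()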