I have two classes that need to pass data between each other. The first class instantiates the second. The second class needs to be able to pass information back to the first. However, I cannot instantiate ClassOne again from ClassTwo. Both classes run off a shared timer but poll different things, so while they share the timer they cannot share the objects they poll.
My current solution (which works) is to pass a method of ClassOne to ClassTwo, which ClassTwo uses to send data back up, but I feel this might be a bit hacky and the wrong way to go about it.
class ClassOne:
    def __init__(self, timer):
        self.classTwo = ClassTwo(self.process_alerts, timer)
        self.classTwo.start()

    def process_alerts(self, alert_msg):
        print(alert_msg)

class ClassTwo:
    def __init__(self, process_alerts, timer):
        self.process_alerts = process_alerts  # <----- ClassOne's method

    def start(self):
        # check for alerts:
        #     if alert found:
        #         self.alert(alert_msg)
        pass

    def alert(self, alert_msg):
        self.process_alerts(alert_msg)  # <----- ClassOne's method
Thank you for your time.
Nothing prevents you from passing the current ClassOne instance (self) to its own ClassTwo instance:
class ClassOne(object):
    def __init__(self):
        self.two = ClassTwo(self)
        self.two.start()

    def process_alert(self, msg):
        print(msg)

class ClassTwo(object):
    def __init__(self, parent):
        self.parent = parent

    def start(self):
        while True:
            if self.has_message():
                self.parent.process_alert(self.get_message())
Note that in this context "parent" denotes a containment relationship ("has a"); it has nothing to do with inheritance ("is a").
If what bugs you is that ClassOne is responsible for instantiating ClassTwo (which indeed introduces strong coupling), you can either change ClassOne so it takes a factory:
class ClassOne(object):
    def __init__(self, factory):
        self.other = factory(self)
        self.other.start()
    # etc
and then pass ClassTwo as the factory:
c1 = ClassOne(ClassTwo)
So you can actually pass anything that returns an object with the right interface, which makes unit testing easier.
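For instance, here is a minimal sketch of that testing benefit (FakeTwo is a hypothetical stub of mine, not part of the original code; it assumes the process_alert method from the first snippet):

class FakeTwo(object):
    def __init__(self, parent):
        self.parent = parent

    def start(self):
        # feed a canned alert instead of polling anything real
        self.parent.process_alert("test alert")

c1 = ClassOne(FakeTwo)  # prints "test alert"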
Or - at least in your (I assume stripped-down) example - you could just make ClassOne pass itself to ClassTwo.start() and explicitly pass the ClassTwo instance to ClassOne, i.e.:
class ClassOne(object):
    def __init__(self, other):
        self.other = other
        self.other.start(self)

    def process_alert(self, msg):
        print(msg)
class ClassTwo(object):
    def start(self, parent):
        while True:
            if self.has_message():
                parent.process_alert(self.get_message())
c2 = ClassTwo()
c1 = ClassOne(c2)
Or, even simpler, remove the call to ClassTwo.start from ClassOne, and then you don't need any reference to a ClassTwo instance in ClassOne.__init__ at all:
class ClassOne(object):
    def process_alert(self, msg):
        print(msg)

class ClassTwo(object):
    def start(self, parent):
        while True:
            if self.has_message():
                parent.process_alert(self.get_message())
c1 = ClassOne()
c2 = ClassTwo()
c2.start(c1)
This is as decoupled as it can get, but it only works if ClassTwo needs the ClassOne instance only in start() (and in methods called from start), and ClassOne doesn't need to keep a reference to the ClassTwo instance either.
You could remove, or at least minimize, the coupling between the classes. I have found this sort of architecture maps really well to sharing data by communicating across a Queue.
By using a Queue you can decouple the two classes. The producer (ClassTwo) can check for messages and publish them to a queue; it no longer needs to know how to correctly instantiate the other class or interact with it, it just passes a message.
Then a ClassOne instance can pull messages from the queue as they become available. This also lends itself well to scaling each instance independently of the other.
ClassTwo -> publishes to queue -> ClassOne pulls from queue.
This also helps with testing, as the two classes are completely isolated: you can provide a Queue to either class on its own.
Queues also usually provide operations that block until a message becomes available, so you don't have to manage timeouts.
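A minimal sketch of that shape, assuming the standard library's queue.Queue, a thread for the consumer, and a None sentinel to shut it down (all names here are illustrative, not from the question):

import queue
import threading

class ClassTwo:
    # producer: polls for alerts and publishes them to the queue
    def __init__(self, q):
        self.q = q

    def start(self):
        for alert in ("disk full", "db down"):  # stand-in for real polling
            self.q.put(alert)
        self.q.put(None)  # sentinel: tell the consumer to stop

class ClassOne:
    # consumer: pulls messages from the queue as they arrive
    def __init__(self, q):
        self.q = q

    def process_alerts(self):
        while True:
            msg = self.q.get()  # blocks until a message is available
            if msg is None:
                break
            print(msg)

q = queue.Queue()
consumer = threading.Thread(target=ClassOne(q).process_alerts)
consumer.start()
ClassTwo(q).start()
consumer.join()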
I have a base class whose method uses a with statement. In a child class, I override the same method, and would like to then access that same with statement (instead of having two with statements).
What are the standard ways of solving this problem?
For an example and possible solution, please see below.
Sample using threading.Lock
from threading import Lock

class BaseClass:
    def __init__(self):
        self.lock = Lock()
        self._data = 0

    def do_something_locked(self) -> None:
        with self.lock:
            self._data += 5

class ChildClass(BaseClass):
    def do_something_locked(self) -> None:
        super().do_something_locked()
        # Obviously the parent class's self.lock's __exit__ method has
        # already been called. What are accepted methods to add more
        # functionality inside parent class's "with" statement?
        with self.lock:
            self._data += 1
Possible Solution
My first inclination is to define a private method in the BaseClass like so:
def do_something_locked(self) -> None:
    with self.lock:
        self._do_something()

def _do_something(self) -> None:
    self._data += 5
And then the ChildClass can just override _do_something. This will work fine.
I am wondering, are there any other common patterns of solving this problem?
My first inclination is to define a private method in the BaseClass like so... And then the ChildClass can just override _do_something. This will work fine.
This is a good approach to the problem, even when you don't have a special requirement (like needing to remain within a with block context). I would not use a leading underscore for the "hook" method name, because anything that you expect to be overridden in derived classes is logically part of the class interface. Also, if the self._data += 5 part always needs to happen, then leave it in do_something_locked.
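A minimal sketch of that advice, reusing the classes from the question (the hook name extra_work is mine, purely illustrative):

from threading import Lock

class BaseClass:
    def __init__(self):
        self.lock = Lock()
        self._data = 0

    def do_something_locked(self) -> None:
        with self.lock:
            self._data += 5    # always happens, so it stays here
            self.extra_work()  # public hook, still inside the lock

    def extra_work(self) -> None:
        pass  # no-op by default

class ChildClass(BaseClass):
    def extra_work(self) -> None:
        self._data += 1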
are there any other common patterns of solving this problem?
Specific to the problem, you could use a re-entrant lock, as shown in the other answer. You could also ignore the fact that the classes are related and use dependency injection - make a general method in the base class that accepts a callable and executes it while holding the lock:
# in the base class
def do_locked(self, what, *args, **kwargs):
    with self.lock:
        what(*args, **kwargs)

# in the derived class
def _implementation(self):
    pass

def do_interesting_thing(self):
    # pass in our own bound method, which takes no arguments
    self.do_locked(self._implementation)
This way allows for client code to make use of the lock in custom ways. It's probably not a great idea if you don't need or want that functionality.
Use a re-entrant lock. This will automatically "connect" the nested with statements, releasing the lock only after the outermost with exits.
from threading import RLock

class BaseClass:
    def __init__(self):
        self.lock = RLock()
        self._data = 0

    def do_something_locked(self) -> None:
        with self.lock:
            self._data += 5

class ChildClass(BaseClass):
    def do_something_locked(self) -> None:
        with self.lock:
            super().do_something_locked()
            self._data += 1
In general, the pattern of reentrant context managers exists explicitly to allow possibly-nested contexts.
These context managers can not only be used in multiple with statements, but may also be used inside a with statement that is already using the same context manager.
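For instance, a minimal demonstration of that property with the stdlib RLock (my example, not from the quoted docs):

from threading import RLock

lock = RLock()
with lock:
    with lock:  # re-acquiring the same RLock does not deadlock
        print("both levels hold the lock")
# the lock is fully released only when the outermost with exits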
Is it good design to create an object of the child class in the parent, as in the example below? It seems to be working, but is it good design? Is there a better way to do it?
class parent(object):
    def __init__(self):
        print('Im running')

    def execute(self):
        x = child()
        x.run()
        x.myfun()

    def myfun(self):
        print('parent function')

    def run(self):
        print('parent running')

class child(parent):
    def __init__(self):
        super().__init__()
        print('Im running too')

    def run(self):
        print('child running')
f = parent()
f.execute()
This is definitely not a good design for your problem, and not good design in general (barring exceptions I cannot think of), and it definitely goes against OOP design and SOLID principles.
Put simply, in OOP design, or any other software engineering frame of mind, you want clear relations. This makes the relationship between your parent class and your child class inherently more complex. Not to mention that most other languages (at least those that run compiled code) would not allow such a thing.
If you need to have an instance of one in the other and vice versa, maybe inheritance was the wrong pattern to begin with, since your classes seem to be connected in a two-way manner, unlike the scenarios in which inheritance is usually employed. One alternative is sketched below.
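As one hedged alternative (my sketch, not from the question): model the relationship with composition instead, so it stays a plain "has a" and neither class needs to construct the other's subtype:

class Worker:
    def run(self):
        print('child running')

class Parent:
    def __init__(self, worker):
        self.worker = worker  # "has a", not "is a"

    def execute(self):
        self.worker.run()
        self.myfun()

    def myfun(self):
        print('parent function')

Parent(Worker()).execute()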
The fact that execute doesn't use self at all suggests it should be a class method, in which case you can use whichever class is actually provided to instantiate x.
Once you've done this, the definition of Parent no longer relies on any particular subclass; in fact, it doesn't rely on the fact that Parent is subclassed at all; Parent.execute() will continue to work.
For example,
class Parent:
    def __init__(self):
        print('Im running')

    @classmethod
    def execute(cls):
        x = cls()
        x.run()
        x.myfun()

    def myfun(self):
        print('parent function')

    def run(self):
        print('parent running')

class Child(Parent):
    def __init__(self):
        super().__init__()
        print('Im running too')

    def run(self):
        print('child running')
Child.execute()
This will output
Im running
Im running too
child running
parent function
Since Child.execute isn't defined, it resolves to Parent.execute. But Child is still the first argument passed. As a result, x will be an instance of Child, not Parent. x.run() thus runs Child.run, but x.myfun() runs Parent.myfun.
The fact that Parent.execute still depends on x having an attribute specific to cls, though, suggests that you should restrict execute to using only things defined by Parent, and let a child override execute to add any child-specific behavior.
Or execute should be an instance method that simply calls self.run and self.myfun, putting the burden on the caller to invoke execute on an appropriate object:
c = Child()
c.execute()
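A self-contained sketch of that instance-method variant (same class names as above):

class Parent:
    def execute(self):
        self.run()
        self.myfun()

    def myfun(self):
        print('parent function')

    def run(self):
        print('parent running')

class Child(Parent):
    def run(self):
        print('child running')

Child().execute()  # prints "child running", then "parent function"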
This is a feature I miss in several languages, and I wonder if anyone has an idea how it can be done in Python.
The idea is that I have a base class:
class Base(object):
    def __init__(self):
        self.my_data = 0

    def my_rebind_function(self):
        pass
and a derived class:
class Child(Base):
    def __init__(self):
        super().__init__()
        # Do some stuff here
        self.my_rebind_function()  # <==== This is the line I want to get rid of

    def my_rebind_function(self):
        # Do stuff with self.my_data
        pass
As can be seen above, I have a rebind function which I want called after Child.__init__ has done its job. And I want this done for all inherited classes, so it would be great if it were handled by the base class, so I do not have to retype that line in every child class.
It would be nice if the language had something like __finally__, operating similarly to how finally works with exceptions: it should run after all the __init__ functions (of all derived classes) have run. So the call order would be something like:
Base1.__init__()
...
BaseN.__init__()
LeafChild.__init__()
LeafChild.__finally__()
BaseN.__finally__()
...
Base1.__finally__()
And then object construction is finished. This is also kind of similar to unit testing with setup, run and teardown functions.
You can do this with a metaclass, like this:
class Meta(type):
    def __call__(cls, *args, **kwargs):
        print("start Meta.__call__")
        instance = super().__call__(*args, **kwargs)
        instance.my_rebind_function()
        print("end Meta.__call__\n")
        return instance

class Base(metaclass=Meta):
    def __init__(self):
        print("Base.__init__()")
        self.my_data = 0

    def my_rebind_function(self):
        pass

class Child(Base):
    def __init__(self):
        super().__init__()
        print("Child.__init__()")

    def my_rebind_function(self):
        print("Child.my_rebind_function")
        # Do stuff with self.my_data
        self.my_data = 999

if __name__ == '__main__':
    c = Child()
    print(c.my_data)
By overriding Meta.__call__ you can hook in after all the __init__ (and __new__) methods of the class tree have run, and before the instance is returned. This is the place to call your rebind function. To make the call order clear I added some print statements. The output will look like this:
start Meta.__call__
Base.__init__()
Child.__init__()
Child.my_rebind_function
end Meta.__call__
999
If you want to read on and get deeper into the details, I can recommend the following great article: https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/
I may still not fully understand, but this seems to do what I think you want:
class Base(object):
    def __init__(self):
        print("Base.__init__() called")
        self.my_data = 0
        self.other_stuff()
        self.my_rebind_function()

    def other_stuff(self):
        """ empty """

    def my_rebind_function(self):
        """ empty """

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()

    def other_stuff(self):
        print("In Child.other_stuff() doing other stuff I want done in Child class")

    def my_rebind_function(self):
        print("In Child.my_rebind_function() doing stuff with self.my_data")
child = Child()
Output:
Base.__init__() called
In Child.other_stuff() doing other stuff I want done in Child class
In Child.my_rebind_function() doing stuff with self.my_data
If you want a "rebind" function to be invoked after each instance of a type inheriting from Base is instantiated, then I would say this "rebind" function can live outside the Base class (or any class inheriting from it).
You can have a factory function that gives you the object you need when you invoke it (for example, give_me_a_processed_child_object()). This factory function basically instantiates an object and does something to it before it returns it to you.
Putting logic in __init__ is not a good idea because it obscures logic and intention. When you write kid = Child(), you don't expect many things to happen in the background, especially things that act on the instance of Child that you just created. What you expect is a fresh instance of Child.
A factory function, however, transparently does something to an object and returns it to you. This way you know you're getting an already processed instance.
Finally, you wanted to avoid adding "rebind" methods to your Child classes, which you now can, since all that logic can live in your factory function.
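A minimal sketch of that factory approach, reusing Base and Child from the question (give_me_a_processed_child_object is the name suggested above):

def give_me_a_processed_child_object():
    kid = Child()
    kid.my_rebind_function()  # runs only after __init__ has fully finished
    return kid

kid = give_me_a_processed_child_object()  # an already-processed instance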
I'm working on a module that I'm hoping to be somewhat dynamic, in that anyone can add features relatively easily.
The basic idea is to have a class, CriticBase, which handles all criticisms for this deployment. The critics would be any class that has inherited from CriticBase.
Pseudo Example:
class CriticBase:
    def execute(self):
        for critic in self.__subclasses__():
            critic.run()

class DatabaseCritic(CriticBase):
    def run(self):
        ...  # things

class DiskSpaceCritic(CriticBase):
    def run(self):
        ...  # things

def DoWork():
    controller = CriticBase()
    a = DatabaseCritic()
    b = DiskSpaceCritic()
    ...
    controller.execute()
I hope that kind of makes sense. Basically the idea is to have a framework that's fairly straightforward for other devs to add to. All you need to do is define some subclass of CriticBase, and everything else is handled for you by the critic framework.
However, it seems pretty ugly to me to assign these instances to names that are never going to be used. Is there such a thing as lingering objects in Python? Could I do away with the assignment and still have a reference to the instantiated class reachable from the base class? Or do I have to assign it to something, or else it will be garbage collected?
My understanding is that you don't want other devs to have to instantiate the subclasses. Actually, they don't need to, as long as the method run() is a class method:
# The framework provides this base class
class CriticBase(object):
    @classmethod
    def execute(cls):
        for critic_class in cls.__subclasses__():
            critic_class.run()

# Devs only need to provide a definition of a subclass
class DatabaseCritic(CriticBase):
    @classmethod
    def run(cls):
        ...  # do something specific about the database

class DiskSpaceCritic(CriticBase):
    @classmethod
    def run(cls):
        ...  # do something specific about disk space

# now the base class does all the work
def DoWork():
    CriticBase.execute()
With this approach, you use Python's inheritance machinery to collect the subclasses into a list, and your code is free of useless instantiations.
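To illustrate the extension story (MemoryCritic is a hypothetical addition of mine, not from the question): merely defining a new subclass is enough for execute() to pick it up.

class MemoryCritic(CriticBase):
    @classmethod
    def run(cls):
        print("checking memory")

DoWork()  # runs run() on every CriticBase subclass, including MemoryCritic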
Well, you could achieve this with a publish/subscribe pattern. Roughly, it could be:
class CriticServer:
    def __init__(self):
        self.clients = []

    def insert_client(self, client):
        self.clients.append(client)

    def execute(self):
        for client in self.clients:
            client.run()

class CriticClient:
    # move what you would inherit from server to here
    def __init__(self, server):
        server.insert_client(self)

class DatabaseCriticClient(CriticClient):
    def run(self):
        pass

class DiskSpaceCriticClient(CriticClient):
    def run(self):
        pass

def main():
    server = CriticServer()
    DiskSpaceCriticClient(server)
    DatabaseCriticClient(server)
    server.execute()
Since I don't know too many details about your project, I'm tempted to say it would be a better idea to create a base class for clients instead of subclassing the server.
Maybe it doesn't have too much magic, but it works nicely and is easy to understand and extend (which is sometimes better than pure magic).
I am trying to implement thread-safe code, but have run into a simple problem. I have searched and not found a solution.
Let me show some abstract code to describe the problem:
import threading

class A(object):
    sharedLock = threading.Lock()
    shared = 0

    @classmethod
    def getIncremented(cls):
        with cls.sharedLock:
            cls.shared += 1
            return cls.shared

class B(A):
    pass

class C(A):
    @classmethod
    def getIncremented(cls):
        with cls.sharedLock:
            cls.shared += B.getIncremented()
            return cls.shared
I want class A to be inherited by many child classes, for example for enumerations or lazy variables - the specific use doesn't matter. I have already finished the single-threaded version and now I want to update it for multithreading.
This code gives the following results:
id(A.sharedLock) 11694384
id(B.sharedLock) 11694384
id(C.sharedLock) 11694384
This means that the lock in class A is the same lock as in class B, which is bad: the first entry into the lock in class B will also lock classes A and C. If C then uses B, it will lead to deadlock.
I could use RLock, but that seems like an invalid programming pattern, and I am not sure it wouldn't produce a more serious deadlock.
How can I change the sharedLock value during class initialization to a new lock, so that id(A.sharedLock) != id(B.sharedLock), and likewise for A and C, and for B and C?
How can I hook class initialization in Python, generically, to change some class variables?
The question is not too complex, but I do not know what to do with it.
I want to inherit the parent's shared variables, except the shared parent locks
You must not do this. It makes access to the "shared variables" not thread-safe.
sharedLock protects the shared variable. If the same shared variable can be modified in a recursive call, then you need RLock(). Here shared means shared among all subclasses.
It looks like you want a standalone function (or a static method) instead of the classmethod:
from threading import Lock

def getIncremented(_lock=Lock(), _shared=[0]):
    with _lock:
        _shared[0] += 1
        return _shared[0]
Thus all classes use the same shared variable (and the corresponding lock).
If you want each class to have its own shared variable (here shared means shared among instances of this particular class), then don't use cls.shared, because attribute lookup may traverse the ancestors to find it.
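A small illustration of why that lookup is a trap (my example, not from the answer): augmented assignment on a subclass reads the inherited value, then binds a brand-new attribute on the subclass.

class A:
    shared = 0

class B(A):
    pass

B.shared += 1              # reads A.shared (0), then binds B.shared = 1
print(A.shared, B.shared)  # -> 0 1: the two classes now hold separate values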
To hint that subclasses shouldn't use a variable directly, you could use the syntax for a private variable:
from threading import Lock

class A:
    __shared = 0
    __lock = Lock()
If a subclass overrides a method that uses __shared, the overriding code won't accidentally use A.__shared directly, because the name is mangled per class.
As you noticed, if you expose shared locks as class attributes, the locks are shared by subclasses.
You could hack around this by redefining the lock on each subclass:
class B(A):
    sharedLock = threading.Lock()
You could even use metaclasses to achieve this (please don't). It seems to me that you're approaching the program from the wrong angle.
This task is easier if you assign locks explicitly to instances (not classes).
import threading

class A(object):
    def __init__(self, lock):
        self.sharedLock = lock

my_lock = threading.Lock()
a = A(my_lock)
Of course, you run into the "problem" of having to explicitly pass the lock for each instance. This is traditionally solved using a factory pattern, but in Python you can simply use functions properly:
from functools import partial

A_with_mylock = partial(A, my_lock)
a2 = A_with_mylock()
Here is a solution - it allows a separate lock per class, since the lock is created at class-construction time (in the metaclass). Thank you for all the hints that helped me get to this code; it looks very nice.
It could also be done with a mangled variable, but that needs the hardcoded name '_A__lock', which can be problematic, and I have not tested it.
import threading

class MetaA(type):
    def __new__(cls, name, bases, clsDict):
        # change <type> behavior: give every new class its own lock
        clsDict['_lock'] = threading.Lock()
        return super(MetaA, cls).__new__(cls, name, bases, clsDict)

class A(metaclass=MetaA):
    @classmethod
    def getLock(cls):
        return cls._lock

class B(A):
    pass

print('id(A.getLock())', id(A.getLock()))
print('id(B.getLock())', id(B.getLock()))
print(A.getLock() == B.getLock())
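As a side note (mine, not part of the original answer): on Python 3.6+ the same per-class lock can be achieved without a metaclass, via __init_subclass__:

import threading

class A:
    _lock = threading.Lock()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._lock = threading.Lock()  # a fresh lock for every subclass

    @classmethod
    def getLock(cls):
        return cls._lock

class B(A):
    pass

print(A.getLock() is B.getLock())  # False: each class has its own lock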