I have one class with multiple inheritance. I would like to concatenate the output from some parents' methods that share the same name. Ideally, I would be able to do this without going through all the parent classes, but by explicitly selecting the ones I want.
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return <my_class1.common_method()> + <my_class2.common_method()>
Since you don't want to use the language mechanisms for calling super methods (which are designed to go through all the methods in the superclasses, even ones that are not known at the time the code is written), just call the methods explicitly on the classes you want, by using the class name.
The only thing you have to do differently is to call the method from the class, not from the instance, and insert the instance manually as the first parameter. Python's automatic self reference is only good when calling the method on the most derived subclass (from which point, in a more common design, it would use super to run its counterparts in the superclasses).
For your example to work, you simply have to write it like this:
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)
Note, however, that if any of the common_method implementations were to call super().common_method in a common ancestor base, that super method would be run once for each explicit invocation of a subclass's .common_method, as the sketch below shows. If you wanted to specialize around that, it would be tough to do.
In other words, a "super" counterpart that would allow you to specify which superclasses to visit when calling the method, and ensure any super method called by those runs only once, is feasible, but complicated and error prone. If you can use explicit classes like in this example, it is 100 times simpler.
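To make that caveat concrete, here is a minimal sketch (the ancestor class is invented for illustration) where a shared base's common_method runs once per explicit call:

class ancestor:
    def common_method(self): return ['ancestor']

class my_class1(ancestor):
    def common_method(self):
        return super().common_method() + ['dependency_1']

class my_class2(ancestor):
    def common_method(self):
        return super().common_method() + ['dependency_2']

class composite(my_class1, my_class2):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)

print(composite().do_something_important())
# ['ancestor', 'dependency_2', 'dependency_1', 'ancestor', 'dependency_2']

Note that the first explicit call even drags my_class2 in a second time, because super() inside my_class1 follows the MRO of the composite instance, not the static class hierarchy.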
In my code, I define a class which is derived from another class in a package which I cannot modify. I want to override a method of that base class, but the method calls itself recursively. Unfortunately, the recursion now calls the newly defined method of the derived class instead of the original method of the base class. How can I tell Python that the base class has to call its own method and not the derived one, without being able to modify the base class?
Example:
class Base(dict):
    ''' defined in a package, cannot modify anything in this class '''
    def func(self, param):
        if not recursion_abort_condition:  # pseudocode for the base case check
            # The next line doesn't call Base.func() if someone overloads it in a derived class.
            # How can I achieve that it calls Base.func() every time, regardless of derived classes?
            # This class doesn't know that someone derives it at some time, so modifying it is not an option.
            compute_something += self.func(some_value)
        return stuff

class Derived(Base):
    ''' My class. Here, I have full access '''
    def __init__(self):
        self._lock = ...  # init some mutex

    def func(self, param):
        # runs into a deadlock here on the second, recursive call out of Base.func()
        with self._lock:  # protect the method of the base class in some way
            return super(Derived, self).func(param)
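One way to at least break the deadlock (a sketch of my own, not from the original question, assuming the lock only needs to guard against other threads): replace the mutex with threading.RLock, which the owning thread may acquire again, so the recursive re-entry through Derived.func no longer blocks:

import threading

class Derived(Base):
    ''' My class. Here, I have full access '''
    def __init__(self):
        super().__init__()
        self._lock = threading.RLock()  # reentrant: the same thread can re-acquire it

    def func(self, param):
        with self._lock:  # recursion out of Base.func() re-enters without deadlocking
            return super().func(param)

This doesn't stop Base.func from dispatching to the override, but it removes the deadlock the override introduced.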
I've been trying to comprehend Python's implementation of OOP. Essentially, I need a superclass that defines some shared attributes that all other classes use as input for their methods.
This is how I thought it should be done:
class One():
    def __init__(self, name):
        self.name = name

class Two(One):
    def __init__(self, name):  # name from class One...
        One.__init__(self, name)

    def method_using_name_from_one(self, name_from_one):
        return name_from_one
I guess that I could do this by just declaring all the methods in class Two as methods of class One, but I'd much prefer to keep them separated. So to recap: I want the parameters of the methods in class Two to use the attributes declared in class One. Essentially, I want to pass an instantiated object as the argument for class Two's methods.
When you say
class Two(One):
One isn't a parameter of class Two; it means class Two inherits from class One. In other words, unless you override something, it gets everything class One has. Edit: by this I mean attributes and methods, not an instance of the class. Since you have:
def __init__(self, name):  # name from class One...
    One.__init__(self, name)
self.name is set on instances of class Two as well. In other words, you could just say...
def method_using_name_from_one(self):
    return self.name
One thing I would suggest is changing your class One declaration to:
class One(object):
This means it inherits from object; it doesn't mean it's getting passed an object :)
Is this what you meant? Maybe I didn't understand correctly.
If you want the name parameter from One, you could say
def method_using_name_from_one(self, oneInstance):
    return oneInstance.name
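For instance (the values are made up), the two variants can be exercised like this:

one = One('some name')
two = Two('another name')
print(two.name)                             # 'another name', set via One.__init__
print(two.method_using_name_from_one(one))  # 'some name', read from the passed-in One instance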
This question already has answers here: Python - calling ancestor methods when multiple inheritance is involved (2 answers). Closed 8 years ago.
Consider:
class X:
    def some_method(self):
        print("X.some_method called")

class Y:
    def some_method(self):
        print("Y.some_method called")

class Foo(X, Y):
    def some_method(self):
        super().some_method()
        # plus some Foo-specific work to be done here

foo_instance = Foo()
foo_instance.some_method()
Output:
X.some_method called
Switching the class declaration of Foo to instead be:
class Foo(Y,X):
Alters the output to:
Y.some_method called
If I want both ancestor methods to be called, I could alter Foo's implementation as:
def some_method(self):
    X().some_method()
    Y().some_method()
    # plus some Foo-specific work to be done here
This leads to my question. Is there any uber-secret way to cause Python to invoke the method on all ancestors without me doing so explicitly, as in the code above? Something like the following (I'm making up the all_ancestors keyword here; does such a thing actually exist?):
def some_method(self):
    all_ancestors().some_method()
    # plus some Foo-specific work to be done here
with an expected output of:
X.some_method called
Y.some_method called
No, there is no secret way to do that. As I mentioned in your other question, the usual way to do this is not to call all ancestor methods from the single descendant class. Instead, each class should use super to call just one ancestor method, namely the next one up the inheritance chain. If every class in the tree does this (except the topmost base class), then all methods will get called in order. In other words, Foo should use super(), which will call X's method; and then X should also use super(), which will call Y's method.
To make this work right, it is usually best to have a single topmost class in the inheritance tree. In your example this would be a class that is the base of both X and Y. You need such a class to serve as a final stop to the sequence of super calling; this base class should not call super. If you just keep calling super everywhere, eventually it will try to call up to the base object class, and then fail because object doesn't provide the method you're trying to call.
If you can provide X & Y with a common base class or mix-in, this should work:
class ISomeMethod:
    def some_method(self):
        pass

class X(ISomeMethod):
    def some_method(self):
        print("X.some_method called")
        super(X, self).some_method()

class Y(ISomeMethod):
    def some_method(self):
        print("Y.some_method called")
        super(Y, self).some_method()
some_method should then be called in the order in which you declare the base classes in Foo.
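Putting it together with the Foo from the question, a sketch of the full chain:

class Foo(X, Y):
    def some_method(self):
        super().some_method()  # starts the chain: X, then Y, then ISomeMethod's no-op
        # plus some Foo-specific work to be done here

Foo().some_method()
# X.some_method called
# Y.some_method called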
What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass

    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off which will never be used outside the definition of the outer class. For example, to use a metaclass, it's sometimes handy to do
class Foo(object):
    class __metaclass__(type):
        ...
instead of defining a metaclass separately, if you're only using it once.
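Note that the inline __metaclass__ spelling is Python 2 only. In Python 3 the metaclass is passed as a keyword argument instead, so the one-off class has to be defined first (a minimal sketch):

class Meta(type):
    pass  # the one-off metaclass

class Foo(metaclass=Meta):
    pass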
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...

    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2, etc. However, one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
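For example (the module name shapes is hypothetical):

from shapes import Group

a = Group.cls1()
b = Group.cls2()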
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another use for nested classes: when one wants to construct inherited classes whose enhanced functionality is encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionalities of a specific sub-feature of foo

    def __init__(self):
        self.a = self.bar()
        ...

    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionalities for this specific feature

    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar were instead defined outside of class foo, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need to have its first line replaced by self.a = bar2(), which implies rewriting the whole constructor.
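To see that dispatch in action (assuming the sketch above is fleshed out enough to instantiate):

f1, f2 = foo(), foo2()
print(isinstance(f1.a, foo.bar))   # True
print(isinstance(f2.a, foo2.bar))  # True: the same constructor line picked the enhanced class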
You could be using a class as a class generator. Like so (in some off-the-cuff code :)
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass

    def __init__(self, ...):  # sketch only; '...' stands for whatever arguments are needed
        ...

    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
the class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary and the bases are all passed to the function that is the metaclass, and the result is assigned to the class's name in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock-standard classes, is harder-to-read code, harder-to-understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different from any other Python scope.
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass

    def download(self):
        ...
Now anyone reading the code knows all the possible exceptions related to this class.
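Callers then name the exception through the class that owns it, e.g.:

manager = DownloadManager()
try:
    manager.download()
except DownloadManager.DownloadException:
    pass  # handle the download failure here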
Either way, defined inside or outside of a class, it would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = self.empInit.pay  # reuse the instance instead of building a second one

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1 * self.j)
        def t2(self): return self.level(2 * self.j)
        def t3(self): return self.level(3 * self.j)
if __name__ == '__main__':
    for loop in range(10):
        lst = input("Enter name, degree and years : ").split()
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit(), as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interface with it directly, it makes sense to define it inside, as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here. Hence it can also be conveniently defined as a static method in Employee, so that we don't need to pass it into EmpInit and can instead invoke it with Employee.level(), as in the sketch below.
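A sketch of that static-method variant (only the changed parts are shown; pay() works unchanged):

class Employee:
    @staticmethod
    def level(j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg)  # no need to pass level in
        self.base = self.empInit.pay

    # pay() stays exactly as above: self.level(self.yrs) still works,
    # because a staticmethod can be called through the instance too

    class EmpInit:
        def __init__(self, deg):
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return Employee.level(1 * self.j)
        def t2(self): return Employee.level(2 * self.j)
        def t3(self): return Employee.level(3 * self.j)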