Say I built some classes containing some instance methods:
class A:
    def func_of_A(self):
        print("foo")

class B:
    def func_of_B(self):
        print("bar")
How can I construct an object/variable c that is an instance of both A and B, so that I can call both c.func_of_A() and c.func_of_B()?
I could of course build a new class inheriting from A and B and make c a member of that:
class C(A, B):
    pass

c = C()
But that is not what I am looking for. I would rather not create a new class every time I am planning to use a combination of already built ones.
I could also create a function to dynamically define a new class and return an instance of it:
def merge(*inherit_from):
    class D(*inherit_from):
        pass
    return D()

c = merge(A, B)
but this is beyond cursed, because now merge(A, B), merge(A) and merge(B) all return objects whose type is reported as <class '__main__.merge.<locals>.D'>, even though each call actually creates a distinct class.
There should be an intended way to do this, shouldn't there?
Is there a solution that scales well with the number of classes involved? If I already have class A1, class A2, ..., class A100 and I want to construct some c to be an instance of class A2, class A23, class A72, class A99 but not the others, how would I do that? Creating a new class for every combination is pretty much impossible, given the ~2^100 combinations.
You can use type() for that, as @deceze mentioned:
>>> class A:
...     def a():
...         pass
...
>>> class B:
...     def b():
...         pass
...
>>> def merge(name: str, *parents):
...     return type(name, parents, dict())
...
>>> C = merge("C", A, B)
>>> C.a()
>>> C.b()
>>>
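If you also need this to scale to many classes (the A1 ... A100 scenario from the question), you can cache the dynamically created classes so that every distinct combination of bases maps to exactly one class. A minimal sketch, assuming the order of the bases matters (it does, since it determines the MRO) and that a module-level dict is an acceptable cache:

_merged = {}   # cache: tuple of bases -> dynamically created class

def merge_cached(*bases):
    if bases not in _merged:
        name = "_".join(b.__name__ for b in bases)   # e.g. "A_B" or "A2_A23_A72_A99"
        _merged[bases] = type(name, bases, {})
    return _merged[bases]

c = merge_cached(A, B)()                          # instance of a class named "A_B"
assert merge_cached(A, B) is merge_cached(A, B)   # the same class object is reused
assert isinstance(c, A) and isinstance(c, B)

Only the combinations you actually use are ever created, so the ~2^100 possibilities are never materialized.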
Related
I need a class that can create instances of other classes, as well as instances of itself, from inside a class method. I have the following code:
class A:
    # some stuff

class B:
    allowed_children_types = [  # no reason to make this self.allowed_children_types
        A,
        C  # first problem: C is not known here
    ]

    @staticmethod
    def foo(self):
        # use allowed_children_types to create children objects

class C:
    allowed_children_types = [  # no reason to make this self.allowed_children_types
        A,
        B,
        C  # second problem: C is not known because the type object is not yet created
    ]

    @staticmethod
    def foo(self):
        # use allowed_children_types to create children objects
I would rather not create an independent factory, because it would complicate very simple application logic, and I feel that writing a custom metaclass is usually bad design. What should I do to get around this issue?
You have to have all those names defined before you use them. Something like:
class A:
    # some stuff

class B:
    @staticmethod
    def foo(self):
        # use allowed_children_types to create children objects

class C:
    @staticmethod
    def foo(self):
        # use allowed_children_types to create children objects

B.allowed_children_types = [A, C]
C.allowed_children_types = [A, B, C]
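For illustration, here is a runnable sketch of how foo could then consume allowed_children_types. It uses a classmethod rather than the question's staticmethod so the list can be looked up on cls, and it assumes the child classes take no constructor arguments:

class A:
    pass

class B:
    @classmethod
    def foo(cls):
        # build one child object of every allowed type (assumes no-arg constructors)
        return [child_type() for child_type in cls.allowed_children_types]

class C:
    @classmethod
    def foo(cls):
        return [child_type() for child_type in cls.allowed_children_types]

# the forward references are resolved here, after all three classes exist
B.allowed_children_types = [A, C]
C.allowed_children_types = [A, B, C]

children = B.foo()   # one A instance and one C instance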
I have the following issue with class inheritance. Given classes A, B and C as follows:
class A(object):
    ...

class B(A):
    ...

class C(A):
    ...
I want a class D that can either inherit from B or from C, depending on the use case.
Right now, I have solved this issue by having a dynamic class definition:
def D(base_class, parameter1, parameter2):
    class D(base_class):
        ...
    return D(parameter1, parameter2)
Is this the proper way to do it, or is there a better way of solving this issue?
Rather than have D both create a class and return an instance, have it just return the class, which you can then use to create multiple instances as necessary.
def make_D(base_class):
    class D(base_class):
        ...
    return D

DB = make_D(B)
DC = make_D(C)

d1 = DB(...)
d2 = DC(...)
d3 = DC(...)
At this point, you should consider whether you actually need a factory function to define your subclasses, rather than simply define DB and DC directly.
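If the set of base classes is small and known up front, a minimal sketch of that direct alternative (DMixin is an illustrative name, not from the question) keeps the D-specific behaviour in one place:

class A(object):
    ...

class B(A):
    ...

class C(A):
    ...

class DMixin:
    # whatever D adds on top of its base class, written once and shared by both variants
    def d_specific(self):
        return "extra behaviour"

class DB(DMixin, B):
    pass

class DC(DMixin, C):
    pass

d1 = DB()
d2 = DC()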
For example, I want class A and class B to share a common method do_something(), but inside the method, it refers to some_attr which varies based on class A or class B.
class A:
    ...
    some_attr = 1
    ...

    @classmethod
    def do_something(cls):
        ...
        cls.some_attr
        ...

class B:
    ...
    some_attr = 2
    ...

    @classmethod
    def do_something(cls):
        ...
        cls.some_attr
        ...
I also want to be able to easily extend the class by just changing some_attr, without touching the do_something() method.
class C(B):
    ...
    some_attr = 3
    ...
I don't want C to inherit anything from A, that's why I can't let B inherit A and then let C inherit B.
Is there a better solution than defining an abstract class that holds the do_something() method and sets some_attr to None?
You can have an abstract or base class that holds only things you want in common, e.g.:
class Base(object):
    some_attr = None

    @classmethod
    def do_something(cls):
        cls.some_attr
        ...
And inherit from that.
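A runnable sketch of that pattern with the classes from the question; the returned string is just illustrative:

class Base:
    some_attr = None

    @classmethod
    def do_something(cls):
        # the shared logic lives here once; each subclass only overrides some_attr
        return "doing something with {}".format(cls.some_attr)

class A(Base):
    some_attr = 1

class B(Base):
    some_attr = 2

class C(B):            # C extends B without touching do_something
    some_attr = 3

print(A.do_something())   # doing something with 1
print(C.do_something())   # doing something with 3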
I know that in Python, given a class ClassA, with
inspect.getmembers(ClassA, predicate=inspect.ismethod)
I can iterate over the different methods present in ClassA. Inherited methods are also gathered, which is convenient in my case. But what I would need is, given a particular method method1 of ClassA, to get the class from which ClassA inherited method1. It might be ClassA itself, or any of its parents/grandparents. I thought I could recursively traverse the __bases__ attribute, looking for the method1 attribute at each step. But maybe this functionality is already implemented somewhere. Is there another way?
Look through the MRO (Method Resolution Order), using inspect.getmro() (which works on both old and new-style classes):
import inspect

def class_for_method(cls, method):
    return next((c for c in inspect.getmro(cls)
                 if method.__func__ in vars(c).values()), None)
There is currently no stdlib method to do this search for you, no.
Demo:
>>> import inspect
>>> def class_for_method(cls, method):
...     return next((c for c in inspect.getmro(cls)
...                  if method.__func__ in vars(c).values()), None)
...
>>> class Base1(object):
...     def foo(self): pass
...
>>> class Base2(object):
...     pass
...
>>> class ClassA(Base1, Base2):
...     pass
...
>>> class_for_method(ClassA, ClassA.foo)
<class '__main__.Base1'>
If no base class is found, the above expression returns None:
>>> class Bar:
...     def spam(): pass
...
>>> class_for_method(ClassA, Bar.spam) is None
True
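Note that this demo is Python 2, where ClassA.foo is an unbound method carrying __func__. In Python 3, looking a method up on the class gives you a plain function with no __func__ attribute, so a small adaptation (mine, not from the original answer) is to fall back to the callable itself:

import inspect

def class_for_method(cls, method):
    # bound methods expose the underlying function via __func__;
    # plain functions (Python 3 class lookup) are used as-is
    func = getattr(method, "__func__", method)
    return next((c for c in inspect.getmro(cls)
                 if func in vars(c).values()), None)

class Base1:
    def foo(self): pass

class ClassA(Base1):
    pass

print(class_for_method(ClassA, ClassA.foo))   # <class '__main__.Base1'>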
Let's say I have a library function that I cannot change, which produces an object of class A, and I have created a class B that inherits from A.
What is the most straightforward way of using the library function to produce an object of class B?
edit- I was asked in a comment for more detail, so here goes:
PyTables is a package that handles hierarchical datasets in python. The bit I use most is its ability to manage data that is partially on disk. It provides an 'Array' type which only comes with extended slicing, but I need to select arbitrary rows. Numpy offers this capability - you can select by providing a boolean array of the same length as the array you are selecting from. Therefore, I wanted to subclass Array to add this new functionality.
In a more abstract sense, this is a problem I have considered before. The usual solution is, as has already been suggested, to have a constructor for B that takes an A and additional arguments, and then pulls the relevant bits out of A to insert into B. As it seemed like a fairly basic problem, I asked the question to see if there were any standard solutions I wasn't aware of.
This can be done if the initializer of the subclass can handle it, or you write an explicit upgrader. Here is an example:
class A(object):
    def __init__(self):
        self.x = 1

class B(A):
    def __init__(self):
        super(B, self).__init__()
        self._init_B()

    def _init_B(self):
        self.x += 1

a = A()
b = a
b.__class__ = B
b._init_B()

assert b.x == 2
Since the library function returns an A, you can't make it return a B without changing it.
One thing you can do is write a function to take the fields of the A instance and copy them over into a new B instance:
class A:  # defined by the library
    def __init__(self, field):
        self.field = field

class B(A):  # your fancy new class
    def __init__(self, field, field2):
        self.field = field
        self.field2 = field2  # B has some fancy extra stuff

def b_from_a(a_instance, field2):
    """Given an instance of A, return a new instance of B."""
    return B(a_instance.field, field2)

a = A("spam")  # this could be your A instance from the library
b = b_from_a(a, "ham")  # make a new B which has the data from a

print(b.field, b.field2)  # prints "spam ham"
Edit: depending on your situation, composition instead of inheritance could be a good bet; that is, your B class could just contain an instance of A instead of inheriting:

class B2:  # doesn't have to inherit from A
    def __init__(self, a, field2):
        self._a = a  # using composition instead
        self.field2 = field2

    @property
    def field(self):  # pass accesses through to a
        return self._a.field
    # could provide setter, deleter, etc.

a = A("spam")
b = B2(a, "ham")

print(b.field, b.field2)  # prints "spam ham"
You can actually change the .__class__ attribute of the object if you know what you're doing:
In [1]: class A(object):
   ...:     def foo(self):
   ...:         return "foo"
   ...:

In [2]: class B(object):
   ...:     def foo(self):
   ...:         return "bar"
   ...:

In [3]: a = A()

In [4]: a.foo()
Out[4]: 'foo'

In [5]: a.__class__
Out[5]: __main__.A

In [6]: a.__class__ = B

In [7]: a.foo()
Out[7]: 'bar'
Monkeypatch the library?
For example,
import other_library
other_library.function_or_class_to_replace = new_function
Poof, it returns whatever you want it to return.
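A slightly more concrete sketch of that idea: wrap a hypothetical factory so its callers receive B instances. Here other_library and make_a are placeholder names rather than a real API, and b_from_a is the converter from the earlier answer:

import other_library                           # placeholder module, as above

_original_make_a = other_library.make_a        # hypothetical factory that returns an A

def make_b(*args, **kwargs):
    a = _original_make_a(*args, **kwargs)      # let the library build its A as usual
    return b_from_a(a, "ham")                  # then convert the result into a B

other_library.make_a = make_b                  # callers of the factory now get B objects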
Monkeypatch A.__new__ to return an instance of B?
After you call obj = A(), change the result so obj.__class__ = B?
Depending on use case, you can now hack a dataclass to arguably make the composition solution a little cleaner:
from dataclasses import dataclass, fields

@dataclass
class B:
    field: int  # only adds one line per field instead of a whole @property method

    @classmethod
    def from_A(cls, a):
        return cls(**{
            f.name: getattr(a, f.name)
            for f in fields(A)
        })
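For completeness, a hedged usage sketch; note that fields(A) requires A itself to be a dataclass, which is an assumption about the library class here, and any B-only fields need defaults so from_A can omit them:

from dataclasses import dataclass, fields

@dataclass
class A:                      # stand-in for the library class; assumed to be a dataclass
    field_a: int
    name: str = "spam"

@dataclass
class B:
    field_a: int
    name: str = "spam"
    field2: str = "ham"       # B-only extra field with a default

    @classmethod
    def from_A(cls, a):
        # copy every field declared on A from the instance a
        return cls(**{f.name: getattr(a, f.name) for f in fields(A)})

b = B.from_A(A(field_a=1))
print(b)                      # B(field_a=1, name='spam', field2='ham')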