Python - Choose which class to inherit

I want to make two classes A and B, in which B is a slight - but significant - variation of A, and then make a third class C that can inherit either A or B and add functionality to them. The problem is, how do I tell C to inherit A or B based on my preference?
To make things more clear, suppose I have this code:
class A:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def first(self):
        return do_something(1)

    def second(self):
        return do_something(2)

    def third(self):
        return do_something(3)

    def start(self):
        self.first()
        self.second()
        self.third()
class B(A):
    def __init__(self, x, y, z):
        super().__init__(x, y)
        self.z = z

    def second(self):
        super().second()
        do_stuff()

    def third(self):
        do_other_stuff()
That is a very simplified version of the code I used. In particular, A represents a simulator of a manufacturing system, while B represents a simulator of the same manufacturing system with a modification of the behaviour of the main machine-tool.
Now, what I want is to add code to compute some statistics. What it does is something like this:
class C(A):
    def __init__(self, *args):
        super().__init__(*args)
        self.stat = 0

    def second(self):
        super().second()
        self.stat += 1

    def third(self):
        super().third()
        self.stat *= 3
The problem is that class C works in exactly the same way whether it inherits from class A (as in the previous listing) or from class B (the exact same code, only with class C(B): as the first line).
How can I do that? Or is my approach simply not feasible? I think an ideal solution would be to choose which class to inherit from, A or B, when I initialize C. Or, maybe, to be able to pass the class to inherit from to C.
I did some research and also found the possibility of aggregation (which I didn't know about before), but I don't see how it would really be useful here. As a last note, be aware that class A might have up to 20-30 methods, and when I use class C I want class A (or B, depending on which it inherits from) to work exactly as before, with the added chunks of C in between.
P.S. I'm looking for an elegant, not too code-heavy, "Pythonic" way of doing this. I would also really welcome advice on anything you think could be done better. Finally, I can totally modify class C, but classes A and B must remain (apart from small changes) the same.

You can use new-style classes and their method resolution order.
Considering these definitions:
class A(object):
    def __init__(self, x):
        pass

    def foo(self):
        print "A"

class B(object):
    def __init__(self, x, y):
        pass

    def foo(self):
        print "B"
you can build a mixin intended to add functionality to A or B:
class Cmix(object):
    def foo(self):
        super(Cmix, self).foo()
        print "mix"
and inherit from both Cmix and A (or B, respectively):
class CA(Cmix, A):
    pass

class CB(Cmix, B):
    pass
Finally, you can write a convenience function to choose between CA and CB based on the number of parameters:
def C(*args):
    if len(args) == 1:
        return CA(*args)
    else:
        return CB(*args)
Now we have
C(1).foo()
# A
# mix
C(1, 2).foo()
# B
# mix
Note that C is not a real class and you cannot use it as a second argument in isinstance, for example.
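If you would rather pass the base class in explicitly, as the question suggests, a class factory is another minimal sketch of the idea; it reuses the question's A, B, and the body of C, and the names make_C, CFromA, and CFromB are purely illustrative:
def make_C(base):
    """Build the statistics-gathering variant on top of whichever simulator class is passed in."""
    class C(base):
        def __init__(self, *args):
            super().__init__(*args)
            self.stat = 0

        def second(self):
            super().second()
            self.stat += 1

        def third(self):
            super().third()
            self.stat *= 3

    return C

CFromA = make_C(A)  # behaves like A plus the statistics
CFromB = make_C(B)  # behaves like B plus the statistics
Unlike the convenience function above, each class returned by the factory is a real class, so isinstance(obj, CFromA) works as expected.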

Related

Am I able to call a submethod of a class's attribute from that class using the class as an attribute?

I am very sorry for the confusing title, I did not know how else to phrase the question.
Let's say I have a class, A. It is described as shown:
class A:
    def __init__(self, argument):
        self.value = argument

    def submethod(self, argumentThatWillBeAClass):
        print(dir(argumentThatWillBeAClass))
And then I initialize it as shown below:
classAInstance = A('42.0')
Now, I have a class, B. Let's add a submethod that calls A's submethod with B as an argument.
class B:
    def __init__(self, argumentThatIsAClassAInstance):
        self.classAInstance = argumentThatIsAClassAInstance

    def submethod(self):
        self.classAInstance.submethod(self)
Let's initialize it with classAInstance:
classBInstance = B(classAInstance)
My desired result is that all the attributes of B are printed when B.submethod is called. Is this possible, and if not, how would I achieve something like this?
"Now, I have a class, B. Let's add a submethod that calls A's submethod with B as an argument."
But that isn't what your code does. On the following line:
self.classAInstance.submethod(self)
You are calling the method (I don't know what you mean by "sub" method; these are all just normal methods) with an instance of B, not B itself.
Two different ways you could do this:
self.classAInstance.submethod(type(self))
Or:
self.classAInstance.submethod(B)
The semantics aren't exactly the same: the first dynamically retrieves the instance's class, so if some other class inherits from B, dir will be called on that subclass. The second always prints dir(B), regardless of inheritance.
So:
class A:
    def method(self, klass: type) -> None:
        print(dir(klass))

class B:
    def __init__(self, a: A) -> None:
        self.a = a

    def method(self) -> None:
        self.a.method(type(self))
b = B(A())
As one potential solution, you can use inheritance. This allows class B to inherit everything from class A:
class A:
    def __init__(self, argument):
        self.value = argument

    def submethod(self, argumentThatWillBeAClass):
        print(dir(argumentThatWillBeAClass))

class B(A):
    def __init__(self, value):
        super().__init__(value)

    # You can override the method and do extra code too.
    def submethod(self, argumentThatWillBeAClass):
        super().submethod(argumentThatWillBeAClass)  # calls A's submethod
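A quick usage sketch under those definitions (the string argument is just illustrative):
b = B('42.0')
b.submethod(B)  # B.submethod runs its extra code, then A.submethod prints dir(B)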

Using classmethods to implement alternative constructors, how can I add functions to those alternative constructors?

I have a class that can be constructed via alternative constructors using class methods.
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def empty(cls, b):
        return cls(0, b)
So, instead of constructing A directly with A(a, b), I can now also do A.empty(b).
For user convenience, I would like to extend this empty method even further, so that I can initialize A via A.empty() as well as the more specialized but closely-related A.empty.typeI() and A.empty.typeII().
My naive approach did not quite do what I wanted:
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def empty(cls, b):
        def TypeI(b):
            return cls(0, b - 1)

        def TypeII(b):
            return cls(0, b - 2)

        return cls(0, b)
Can anyone tell me how that could be done (or at least convince me why it would be a terrible idea)? I want to stress that, for usage, I imagine such an approach to be very convenient and clear for users, as the functions are grouped intuitively.
You can implement what you want by making Empty a nested class of A rather than a class method. More than anything else this provides a convenient namespace (instances of it are never created) in which to place various alternative constructors, and it can easily be extended.
class A(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __repr__(self):
        return 'A({}, {})'.format(self.a, self.b)

    class Empty(object):  # nested class
        def __new__(cls, b):
            return A(0, b)  # ignore cls & return instance of enclosing class

        @staticmethod
        def TypeI(b):
            return A(0, b - 1)

        @staticmethod
        def TypeII(b):
            return A(0, b - 2)
a = A(1, 1)
print('a: {}'.format(a)) # --> a: A(1, 1)
b = A.Empty(2)
print('b: {}'.format(b)) # --> b: A(0, 2)
bi = A.Empty.TypeI(4)
print('bi: {}'.format(bi)) # --> bi: A(0, 3)
bii = A.Empty.TypeII(6)
print('bii: {}'.format(bii)) # --> bii: A(0, 4)
You can't really do that, because A.empty.something would require the underlying method object to be bound to the type so that you can actually call it, and Python simply won't do that because the type's member is named empty, not TypeI.
So what you would need is some object empty (for example a SimpleNamespace) in your type which returns bound classmethods. The problem is that we cannot access the type while we are still defining it inside the class body, so we cannot use its members to set up such an object there. Instead, we have to do it afterwards:
from types import SimpleNamespace

class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def _empty_a(cls, b):
        return cls(1, b)

    @classmethod
    def _empty_b(cls, b):
        return cls(2, b)

A.empty = SimpleNamespace(a=A._empty_a, b=A._empty_b)
Now, you can access that member’s items and get bound methods:
>>> A.empty.a
<bound method type._empty_a of <class '__main__.A'>>
>>> A.empty.a('foo').a
1
Of course, that isn't really pretty. Ideally, we want to set this up when we define the type. We could use metaclasses for this, but we can actually solve it easily with a class decorator. For example, this one:
def delegateMember(name, members):
    def classDecorator(cls):
        mapping = {m: getattr(cls, '_' + m) for m in members}
        setattr(cls, name, SimpleNamespace(**mapping))
        return cls
    return classDecorator

@delegateMember('empty', ['empty_a', 'empty_b'])
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def _empty_a(cls, b):
        return cls(1, b)

    @classmethod
    def _empty_b(cls, b):
        return cls(2, b)
And magically, it works:
>>> A.empty.empty_a
<bound method type._empty_a of <class '__main__.A'>>
Now that we got it working somehow, of course we should discuss whether this is actually something you want to do. My opinion is that you shouldn't. You can already see from the effort it took that this isn't something that's usually done in Python, and that's already a good sign that you shouldn't do it. Explicit is better than implicit, so it's probably a better idea to just expect your users to type the full name of the class method. My example above was of course structured in a way that A.empty.empty_a would have been longer than just A.empty_a. But even with your names, there isn't a reason why it couldn't be just an underscore instead of a dot.
And also, you can simply add multiple default paths inside a single method. Provide default argument values, or use sensible fallbacks, and you probably don’t need many class methods to create alternative versions of your type.
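For example, a single classmethod with a default argument can cover all three variants of the question's empty constructor; this is just a hedged sketch, and the offset parameter is invented for illustration rather than taken from the question:
class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @classmethod
    def empty(cls, b, offset=0):
        # offset=1 reproduces TypeI, offset=2 reproduces TypeII
        return cls(0, b - offset)

A.empty(5)            # A(0, 5)
A.empty(5, offset=1)  # A(0, 4), what TypeI did
A.empty(5, offset=2)  # A(0, 3), what TypeII did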
It is generally better to have uniform class interfaces, meaning the different usages should be consistent with each other. I consider A.empty() and A.empty.type1() to be inconsistent with each other, because the prefix A.empty intuitively means different things in each of them.
A better interface would be:
class A:
    @classmethod
    def empty_default(cls, ...): ...

    @classmethod
    def empty_type1(cls, ...): ...

    @classmethod
    def empty_type2(cls, ...): ...
Or:
class A:
    @classmethod
    def empty(cls, empty_type, ...): ...
Here's an enhanced implementation of my other answer that makes it — as one commenter put it — "play well with inheritance". You may not need this, but others doing something similar might.
It accomplishes this by using a metaclass to dynamically create and add a nested Empty class similar to the one shown in the other answer. The main difference is that the default Empty class in derived classes will now return Derived instances instead of instances of A, the base class.
Derived classes can override this default behavior by defining their own nested Empty class (it can even be derived from the one in the base class). Also note that for Python 3, metaclasses are specified using different syntax:
class A(object, metaclass=MyMetaClass):
Here's the revised implementation using Python 2 metaclass syntax:
class MyMetaClass(type):
    def __new__(metaclass, name, bases, classdict):
        # create the class normally
        MyClass = super(MyMetaClass, metaclass).__new__(metaclass, name, bases,
                                                        classdict)
        # add a default nested Empty class if one wasn't defined
        if 'Empty' not in classdict:
            class Empty(object):
                def __new__(cls, b):
                    return MyClass(0, b)

                @staticmethod
                def TypeI(b):
                    return MyClass(0, b - 1)

                @staticmethod
                def TypeII(b):
                    return MyClass(0, b - 2)

            setattr(MyClass, 'Empty', Empty)
        return MyClass
class A(object):
    __metaclass__ = MyMetaClass

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __repr__(self):
        return '{}({}, {})'.format(self.__class__.__name__, self.a, self.b)
a = A(1, 1)
print('a: {}'.format(a)) # --> a: A(1, 1)
b = A.Empty(2)
print('b: {}'.format(b)) # --> b: A(0, 2)
bi = A.Empty.TypeI(4)
print('bi: {}'.format(bi)) # --> bi: A(0, 3)
bii = A.Empty.TypeII(6)
print('bii: {}'.format(bii)) # --> bii: A(0, 4)
With the above, you can now do something like this:
class Derived(A):
    pass  # inherits everything, except it will get a custom Empty
d = Derived(1, 2)
print('d: {}'.format(d)) # --> d: Derived(1, 2)
e = Derived.Empty(3)
print('e: {}'.format(e)) # --> e: Derived(0, 3)
ei = Derived.Empty.TypeI(5)
print('ei: {}'.format(ei)) # --> ei: Derived(0, 4)
eii = Derived.Empty.TypeII(7)
print('eii: {}'.format(eii)) # --> eii: Derived(0, 5)

Method Inheritance in Python

I have a parent class and two child classes. The parent class is an abstract base class that has a method combine that gets inherited by the child classes, but each child implements combine differently from a parameter perspective, so each of their methods takes a different number of parameters. In Python, when a child inherits a method and re-implements it, the newly re-implemented method must match the original parameter by parameter. Is there a way around this? I.e., can the inherited method have a dynamic parameter composition?
This code demonstrates that the signature of an overridden method can easily change.
class Parent(object):
    def foo(self, number):
        for _ in range(number):
            print "Hello from parent"

class Child(Parent):
    def foo(self, number, greeting):
        for _ in range(number):
            print greeting

class GrandChild(Child):
    def foo(self):
        super(GrandChild, self).foo(1, "hey")
p = Parent()
p.foo(3)
c = Child()
c.foo(2, "Hi")
g = GrandChild()
g.foo()
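If you want the abstract parent's combine to advertise up front that children may take whatever parameters they need, one minimal, hedged sketch (illustrative names, abc machinery omitted for brevity) is to declare the base signature with *args and **kwargs:
class Combiner(object):
    def combine(self, *args, **kwargs):
        # The base accepts anything; concrete children narrow this to what they need.
        raise NotImplementedError

class PairCombiner(Combiner):
    def combine(self, x, y):
        return x + y

class TripleCombiner(Combiner):
    def combine(self, x, y, z):
        return x + y + z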
As the other answer demonstrates for plain classes, the signature of an overridden inherited method can be different in the child than in the parent.
The same is true even if the parent is an abstract base class:
import abc

class Foo:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def bar(self, x, y):
        return x + y

class ChildFoo(Foo):
    def bar(self, x):
        return super(self.__class__, self).bar(x, 3)

class DumbFoo(Foo):
    def bar(self):
        return "derp derp derp"
cf = ChildFoo()
print cf.bar(5)
df = DumbFoo()
print df.bar()
Inappropriately complicated detour
It is an interesting exercise in Python metaclasses to try to restrict the ability to override methods, such that their argument signature must match that of the base class. Here is an attempt.
Note: I'm not endorsing this as a good engineering idea, and I did not spend time tying up loose ends, so there are likely small caveats about the code below that could be handled more cleanly or efficiently.
import types
import inspect

def strict(func):
    """Add some info for functions having a strict signature."""
    arg_sig = inspect.getargspec(func)
    func.is_strict = True
    func.arg_signature = arg_sig
    return func

class StrictSignature(type):
    def __new__(cls, name, bases, attrs):
        # Attributes in the class dict are plain functions, so include FunctionType.
        func_types = (types.FunctionType, types.MethodType)
        # Check each attribute in the class being created.
        for attr_name, attr_value in attrs.iteritems():
            if isinstance(attr_value, func_types):
                # Check every base for @strict functions.
                for base in bases:
                    base_attr = base.__dict__.get(attr_name)
                    base_attr_is_function = isinstance(base_attr, func_types)
                    base_attr_is_strict = hasattr(base_attr, "is_strict")
                    # Assert that inspected signatures match.
                    if base_attr_is_function and base_attr_is_strict:
                        assert (inspect.getargspec(attr_value) ==
                                base_attr.arg_signature)
        # If everything passed, create the class.
        return super(StrictSignature, cls).__new__(cls, name, bases, attrs)

# Make a base class to try out strictness
class Base:
    __metaclass__ = StrictSignature

    @strict
    def foo(self, a, b, c="blah"):
        return a + b + len(c)

    def bar(self, x, y, z):
        return x
#####
# Now try to make some classes inheriting from Base.
#####
class GoodChild(Base):
    # Was declared strict, better match the signature.
    def foo(self, a, b, c="blah"):
        return c

    # Was never declared as strict, so no rules!
    def bar(im_a_little, teapot):
        return teapot / 2
# These below can't even be created. Uncomment and try to run the file
# and see. It's not just that you can't instantiate them, you can't
# even get the *class object* defined at class creation time.
#
# class WrongChild(Base):
#     def foo(self, a):
#         return super(self.__class__, self).foo(a, 5)
#
# class BadChild(Base):
#     def foo(self, a, b, c="halb"):
#         return super(self.__class__, self).foo(a, b, c)
Note, like with most "strict" or "private" type ideas in Python, that you are still free to monkey-patch functions onto even a "good class" and those monkey-patched functions don't have to satisfy the signature constraint.
# Instance level
gc = GoodChild()
gc.foo = lambda self=gc: "Haha, I changed the signature!"
# Class level
GoodChild.foo = lambda self: "Haha, I changed the signature!"
and even if you add more complexity to the metaclass, so that it checks whenever any method-type attribute is updated in the class's __dict__ and re-runs the assert whenever the class is modified, you can still use type.__setattr__ to bypass the customized behavior and set an attribute anyway.
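For instance, assuming the metaclass were extended with such a __setattr__ check, this call would still slip past it:
# Invoke type's own __setattr__ directly, bypassing whatever the metaclass overrides
type.__setattr__(GoodChild, 'foo', lambda self: "no signature check performed here")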
In these cases, I imagine Jeff Goldblum as Ian Malcolm from Jurassic Park, looking at you blankly and saying "Consenting adults, uhh, find a way.."

Python: How to update the calls of a third class to the overridden method of the original class?

class ThirdPartyA(object):
    def __init__(self):
        ...

    def ...():
        ...
-------------------
from xxx import ThirdPartyA
class ThirdPartyB(object):
    def a(self):
        ...
        # call to ThirdPartyA
        ...

    def b(self):
        ...
        # call to ThirdPartyA
        ...

    def c(self):
        ...
        # call to ThirdPartyA
        ...
-----------------------------------
from xxx import ThirdPartyA
class MyCodeA(ThirdPartyA):
    def __init__(self):
        # overriding code
When overriding the __init__ method of class A, how could I instruct class B to call MyCodeA instead of ThirdPartyA in all its methods?
The real code is here:
Class Geoposition: ThirdPartyA
Class GeopositionField: ThirdPartyB
My override to class Geoposition so it returns max 5 decimal digits:
from decimal import Decimal

class AccuracyGeoposition(Geoposition):
    def __init__(self, latitude, longitude):
        if isinstance(latitude, (float, int)):
            latitude = '{0:.5f}'.format(latitude)
        if isinstance(longitude, (float, int)):
            longitude = '{0:.5f}'.format(longitude)
        self.latitude = Decimal(latitude)
        self.longitude = Decimal(longitude)
From your updated code, I think what you're trying to do is change GeopositionField.to_python() so that it returns AccuracyGeoposition values instead of Geoposition values.
There's no way to do that directly; the code in GeopositionField explicitly says it wants to construct a Geoposition, so that's what happens.
The cleanest solution is to subclass GeopositionField as well, so you can wrap that method:
class AccuracyGeopositionField(GeopositionField):
    def to_python(self, value):
        geo = super(AccuracyGeopositionField, self).to_python(value)
        return AccuracyGeoposition(geo.latitude, geo.longitude)
If creating a Geoposition and then re-wrapping the values in an AccuracyGeoposition is insufficient (because accuracy has already been lost), you might be able to pre-process things before calling the super method as well, or instead. For example, suppose the way it deals with list values is not acceptable (I realize that's not true here, but it serves as a simple example), while everything else can be left to do its thing and have the result wrapped afterwards. You could do this:
class AccuracyGeopositionField(GeopositionField):
    def to_python(self, value):
        if isinstance(value, list):
            return AccuracyGeoposition(value[0], value[1])
        geo = super(AccuracyGeopositionField, self).to_python(value)
        return AccuracyGeoposition(geo.latitude, geo.longitude)
If worst comes to worst, you may have to reimplement the entire method (maybe by copying, pasting, and modifying its code), but hopefully that will rarely come up.
There are hacky alternatives to this. For example, you could monkeypatch the module to globally replace the Geoposition class with your AccuracyGeoposition class. But, while that may save some work up front, you're almost certain to be unhappy with it when you're debugging things later. Systems that are designed for aspect-oriented programming (which is basically controlled monkeypatching) are great, but trying to cram it into systems that were designed to resist it will give you headaches.
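For completeness, the monkeypatch would look roughly like the sketch below; the module name geoposition is an assumption standing in for wherever GeopositionField actually looks up Geoposition:
import geoposition  # hypothetical module path; use the real one from your project

# From now on, any code that looks up geoposition.Geoposition gets the subclass
geoposition.Geoposition = AccuracyGeoposition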
Assuming your real code works like your example—that is, every method of B creates a new A instance just to call a method on it and discard it—well, that's a very weird design, but if it makes sense for your use case, you can make it work.
The key here is that classes are first-class objects. Instead of hardcoding A, store the class you want as a member of the B instance, like this:
class B(object):
    def __init__(self, aclass=A):
        self.aclass = aclass

    def a(self):
        self.aclass().a()
Now, you just create a B instance with your subclass:
b = B(OverriddenA)
Your edited version does a different strange thing: instead of constructing a new A instance each time to call methods on it, you're calling class methods on A itself. Again, this is probably not what you want—but, if it is, you can do it:
class B(object):
    def __init__(self, aclass=A):
        self.aclass = aclass

    def a(self):
        self.aclass.a()
However, more likely you don't really want either of these. You want to take an A instance at construction time, store it, and use it repeatedly. Like this:
class B(object):
    def __init__(self, ainstance):
        self.ainstance = ainstance

    def a(self):
        self.ainstance.a()
b1 = B(A())
b2 = B(OverriddenA())
If this all seems abstract and hard to understand… well, that's because we're using meaningless names like A, B, and OverriddenA. If you tell us the actual types you're thinking about, or just plug those types in mechanically, it should make a lot more sense.
For example:
class Vehicle(object):
    def move(self):
        print('I am a vehicle, and I am moving')

class Operator(object):
    def __init__(self, vehicle):
        self.vehicle = vehicle

    def move(self):
        print('I am moving my vehicle')
        self.vehicle.move()

class Car(object):
    def move(self):
        print('I am a car, and I am driving')
driver = Operator(Car())
driver.move()
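Running this example prints the operator's line followed by the car's:
# I am moving my vehicle
# I am a car, and I am driving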

Calling a function from a class in Python - different way

EDIT2: Thank you all for your help!
EDIT: On adding @staticmethod, it works. However, I am still wondering why I am getting a type error here.
I have just started OOP and am completely new to it. I have a very basic question regarding the different ways I can call a function from a class.
I have a testClass.py file with the code:
class MathsOperations:
    def __init__(self, x, y):
        self.a = x
        self.b = y

    def testAddition(self):
        return (self.a + self.b)

    def testMultiplication(self):
        return (self.a * self.b)
I am calling this class from another file called main.py with the following code:
from testClass import MathsOperations
xyz = MathsOperations(2, 3)
print xyz.testAddition()
This works without any issues. However, I wanted to use the class in a much simpler way.
I have now put the following code in the testClass.py file. I have dropped the init function this time.
class MathsOperations:
    def testAddition(x, y):
        return x + y

    def testMultiplication(a, b):
        return a * b
Calling this using:
from testClass import MathsOperations
xyz = MathsOperations()
print xyz.testAddition(2, 3)
This doesn't work. Can someone explain what is going wrong in case 2? How do I use this class?
The error I get is "TypeError: testAddition() takes exactly 2 arguments (3 given)".
You have to use self as the first parameter of a method.
In the second case you should use:
class MathOperations:
    def testAddition(self, x, y):
        return x + y

    def testMultiplication(self, a, b):
        return a * b
and in your code you could do the following
tmp = MathOperations()
print tmp.testAddition(2,3)
If you use the class without instantiating it first, as in
print MathOperations.testAddition(2, 3)
it gives you the error "TypeError: unbound method". If you want to do that, you will need the @staticmethod decorator.
For example:
class MathsOperations:
    @staticmethod
    def testAddition(x, y):
        return x + y

    @staticmethod
    def testMultiplication(a, b):
        return a * b
then in your code you could use
print MathsOperations.testAddition(2,3)
Disclaimer: this is not a straight-to-the-point answer; it's more a piece of advice, even if the answer can be found in the references.
IMHO: object oriented programming in Python sucks quite a lot.
The method dispatching is not very straightforward, you need to know about bound/unbound instance/class (and static!) methods; you can have multiple inheritance and need to deal with legacy and new style classes (yours was old style) and know how the MRO works, properties...
In brief: too complex, with lots of things happening under the hood. Let me even say, it is unpythonic, as there are many different ways to achieve the same things.
My advice: use OOP only when it's really useful. Usually this means writing classes that implement well known protocols and integrate seamlessly with the rest of the system. Do not create lots of classes just for the sake of writing object oriented code.
Take a good read through these pages:
http://docs.python.org/reference/datamodel.html
http://docs.python.org/tutorial/classes.html
you'll find them quite useful.
If you really want to learn OOP, I'd suggest starting with a more conventional language, like Java. It's not half as fun as Python, but it's more predictable.
class MathsOperations:
    def __init__(self, x, y):
        self.a = x
        self.b = y

    def testAddition(self):
        return (self.a + self.b)

    def testMultiplication(self):
        return (self.a * self.b)
then
temp = MathsOperations(2, 3)  # __init__ requires both arguments
print(temp.testAddition())
Your methods don't refer to an object (that is, self), so you should
use the @staticmethod decorator:
class MathsOperations:
    @staticmethod
    def testAddition(x, y):
        return x + y

    @staticmethod
    def testMultiplication(a, b):
        return a * b
You need to have an instance of a class to use its methods. Or, if a method doesn't need to access any of the instance's attributes, you can define it as static, and it can then be used even if the class isn't instantiated. Just add the @staticmethod decorator to your methods.
class MathsOperations:
    @staticmethod
    def testAddition(x, y):
        return x + y

    @staticmethod
    def testMultiplication(a, b):
        return a * b
docs: http://docs.python.org/library/functions.html#staticmethod
