I've got two class trees in my Python code:
     BaseComponent              BaseSeries
     /     |      \                 |
Resistor Capacitor Inductor      ESeries
The BaseSeries class tree implements preferred numbers such as the E-series, and generates sets of values between a pair of powers of ten (e.g. [1.0, 2.2, 4.7, 10, 22, 47, 100, 220, 470] for the E3 series with exponents between 1 and 3).
By default, ESeries, like any other BaseSeries subclass, creates sequences of float objects. I'd like to use these classes to instead create sequences of Resistor, Capacitor and Inductor objects. Ideally, the individual Resistor, Capacitor, Inductor and ESeries classes would remain usable on their own (i.e. not relying on methods implemented by other classes).
This sounds like a job for multiple inheritance, but I'm a bit confused about how best to implement this in Python (3). Ideally I'd like to just define something like:
class ResistorESeries(Resistor, ESeries):
    pass

class CapacitorESeries(Capacitor, ESeries):
    pass

class InductorESeries(Inductor, ESeries):
    pass
in order to create classes that yield sequences of resistors, capacitors and inductors, but I don't know how best to tell BaseSeries instances to create objects of type Resistor, Capacitor and Inductor. I can think of two ways, but I can't decide which one is best, and I have a feeling there is a simpler, more Pythonic way that I'm missing:
have BaseSeries contain a property or variable pointing to the element type (e.g. Resistor), set either by the constructor, by a class variable in the child class (e.g. Resistor.ELEMENT_TYPE = Resistor), or via an abstract property provided by the child class:
import abc

class BaseSeries(object):
    ...
    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            yield self.element_type(v)

    @property
    @abc.abstractmethod
    def element_type(self):
        return NotImplemented

class ESeries(BaseSeries):
    ...

class BaseComponent(object):
    ...
    @property
    def element_type(self):
        return self

class Resistor(BaseComponent):
    ...

class ResistorESeries(Resistor, ESeries):
    # now BaseSeries' `element_type` property is provided by `BaseComponent`
    pass
This would mean ESeries cannot be used on its own as a concrete object, as it does not implement this property/variable, which is not ideal.
use self when creating elements in BaseSeries, where self will, as long as Resistor is earlier in the method resolution order, refer to the desired element:
class BaseSeries(object):
    ...
    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            # self here would refer to `Resistor` in
            # `ResistorESeries` instances
            yield self(v)

class ESeries(BaseSeries):
    ...

class BaseComponent(object):
    ...

class Resistor(BaseComponent):
    ...

class ResistorESeries(Resistor, ESeries):
    pass
This has the downside that, when ESeries is used on its own rather than as a mix-in, self refers to the ESeries instance itself, which does not provide the correct __init__ signature.
So, does anyone have an idea of how best to do this in a Pythonic way, with maximum ability to reuse classes on their own?
You are likely mixing some concepts there - notably "instances" and "classes" - your example calls that do self(v) are perplexing.
I can't see from your design why the classes on the BaseComponent tree would need to be inherited along the BaseSeries tree: can't the component type simply be an attribute on the BaseSeries class?
It is simply a matter of using a class attribute and, in the code suggested on your first attempt, a prosaic if statement.
class BaseSeries:
    component = None

    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            yield self.component(v) if self.component else v

class Capacitor(BaseComponent):
    ...

class CapacitorSeries(BaseSeries):
    component = Capacitor
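A quick usage sketch of the above, assuming Capacitor accepts a single value argument, and with a hypothetical values() implementation standing in for the real E-series generator:

class E3Series(BaseSeries):
    def values(self):
        # hypothetical stand-in for the real E-series generator
        return [1.0, 2.2, 4.7]

class CapacitorE3Series(E3Series):
    component = Capacitor

plain = list(E3Series().elements())           # [1.0, 2.2, 4.7]
caps = list(CapacitorE3Series().elements())   # three Capacitor objects

Because component defaults to None, the series class stays usable on its own and simply yields the raw floats.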
If you think you need multiple inheritance, you can just go for your idea of using a property, and use the same if statement there. But if both hierarchies are that orthogonal, I don't see why you'd force the use of multiple inheritance just because the language permits it.
Maybe you prefer to have it the other way around: a factory method on the component tree that will take an ESeries class as input, and extract the values from that ...
Anyway, you are not making the distinction between classes and instances clear there. Do you need a way to produce several subclasses of CapacitorESeries, each class for a different value? Or do you just need instances of Capacitor, each for a different value produced by the series?
class BaseComponent:
    ...
    @classmethod
    def series_factory(cls, series):
        for value in series.values():
            yield cls(value)
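Usage would then be along these lines (hypothetical; series is any object implementing values()):

capacitors = list(Capacitor.series_factory(series))
# one Capacitor instance per value produced by the series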
Of course, there could be use cases for really needing classes for everything you claim, including the factories for series of classes, but your use of self as a callable in your snippets suggests that your stance on that is not that solid.
In that case, first, you need all methods to make proper use of super(): even if they are not supposed to exist across both hierarchies, calling super() will still invoke the proper method on the superclasses. For methods like __init__, this is simply required.
If you design a proper __init__ method using super(), and always using named parameters, your second strategy will work out of the box, after fixing the instantiating call (to something other than self(v)). Using named parameters and passing the remaining parameters to super ensures each class in the tree consumes what it needs of those parameters - and by the time Python gets to the root of both your hierarchies and calls object's __init__, no parameter remains.
class BaseSeries:
    def __init__(self, value_min, value_max, **kwargs):
        self.value_min = value_min
        self.value_max = value_max
        super().__init__(**kwargs)

    def elements(self):
        # loop over numbers in this series
        for v in self.values():
            yield self.__class__(value_min=self.value_min,
                                 value_max=self.value_max, value=v)

class BaseComponent:
    def __init__(self, value, **kwargs):
        self.value = value
        super().__init__(**kwargs)
    ...

class CapacitorESeries(Capacitor, ESeries):
    pass
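Usage would then look something like this (a sketch: values() is assumed to be provided by ESeries, and value is passed as None for the container object itself):

series = CapacitorESeries(value_min=1, value_max=3, value=None)
parts = list(series.elements())
# each element is itself a CapacitorESeries carrying one value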
Related
I want to define an abstract base class, called ParentClass. I want every child class of ParentClass to have a method called "fit" that defines a property "required". If someone tries to create a child class which does not have the "required" property within its fit method, I want an error to be created when the object's fit method is called. I am having trouble doing this.
The context is that I want to create a Parent class, abstract or otherwise, that requires its children to behave in a certain way and have certain properties so that I can trust them to behave in certain ways no matter who is creating children classes. I have found similar questions, but nothing precisely like what I am asking.
My naive attempt was something like the following:
from abc import ABC, abstractmethod

class ParentClass(ABC):
    @abstractmethod
    def fit(self):
        self.required = True

class ChildClass(ParentClass):
    def __init__(self):
        pass

    def fit(self):
        self.required = True

class ChildClass2(ParentClass):
    def __init__(self):
        pass

    def fit(self):
        self.not_essential = True
This doesn't work, but if possible I would like to refactor ParentClass in such a way that if someone runs:
>>> b = ChildClass()
>>> b.fit()
everything works fine, but if someone tries to run
>>> b2 = ChildClass2()
>>> b2.fit()
an error is thrown because the fit method of ChildClass2 doesn't define "required".
Is this possible in Python?
A related question is whether there is a better way to think about structuring my problem. Perhaps there is a better paradigm to achieve what I want? I understand that I can force child classes to have certain methods defined. A workaround would be to have every property I want enforced be returned by a required method, but this feels very clunky, particularly if the number of properties I want to enforce as part of a standard becomes rather large.
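A minimal sketch of one possible approach, using a template method: the parent makes fit() concrete, delegates the real work to an abstract hook, and verifies required afterwards (the _fit name is hypothetical, not from the question):

from abc import ABC, abstractmethod

class ParentClass(ABC):
    def fit(self):
        # run the subclass's implementation, then enforce the contract
        self._fit()
        if not hasattr(self, 'required'):
            raise AttributeError("fit() must set self.required")

    @abstractmethod
    def _fit(self):
        ...

class ChildClass(ParentClass):
    def _fit(self):
        self.required = True

class ChildClass2(ParentClass):
    def _fit(self):
        self.not_essential = True

ChildClass().fit()   # works fine
ChildClass2().fit()  # raises AttributeError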
Related: inspect.getmembers in order?
Background: I'm trying to build a tool that creates a Python file according to a certain specification. One option is to give, as input, a Python module containing an abstract class declaration, and have the tool create a base class that inherits that abstract class but also adds a default implementation for all abstract methods.
For example: say I have the following file, called Abstract.py that contains the following:
from abc import ABCMeta, abstractmethod

class Abstract(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def first(self):
        pass

    @abstractmethod
    def second(self):
        pass
So then the output of my tool would be a file called BaseClass.py that contains:
class BaseClass(Abstract):
    def first(self):
        pass

    def second(self):
        pass
I want the methods in the BaseClass to be in the same order as in Abstract.
My question is: Is there a way to sort the methods according to their appearance without relying on built-in method comparison (which is based on memory address comparison)? I'm also hoping to avoid any kind of file-parsing, if possible.
Please note that I cannot create an instance of Abstract so the solution mentioned in the above related question will not work for me.
At the time of class creation in Python 2 (that is, when the interpreter gets past the class body while running a file, which happens in sequence), the class itself is created as an object. At this point all variables and methods defined in the class body are passed to a call to "type" (which is the default metaclass) as a dictionary.
As you know, dictionaries in Python have no ordering, so ordinarily this is impossible in Python 2. It is possible in Python 3 because metaclasses can implement a __prepare__ method which returns the mapping object that will be used to build the class body - so instead of an ordinary dict, __prepare__ can return an OrderedDict.
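A minimal Python 3 sketch of that mechanism (hypothetical names; the code further below targets Python 2):

from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(metacls, name, bases):
        # the class body is collected into this mapping,
        # so definition order is preserved
        return OrderedDict()

    def __new__(metacls, name, bases, namespace):
        cls = type.__new__(metacls, name, bases, dict(namespace))
        cls._member_order = [k for k in namespace if not k.startswith('__')]
        return cls

class Abstract(metaclass=OrderedMeta):
    def first(self): pass
    def second(self): pass

print(Abstract._member_order)  # ['first', 'second']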
However, in your case, all relevant methods are decorated with @abstractmethod - we can take advantage of that to not only annotate the methods as abstract, but also mark down the order in which they appear.
You can either wrap the abstractmethod decorator, or create another decorator and use both. I'd favor a new decorator that does both things, to keep the line count down.
Also, you have to choose how you will record the order of the methods and make use of it. Ordinarily, iterating over the class's attributes will just iterate over a dictionary (rather, a dictionary proxy), which is unordered - so you have to keep a data structure where the ordered methods are available, and a way to record that order. There are some options there - but maybe the most direct thing is to annotate the method order on the methods themselves, and retrieve them with a call to the built-in sorted with an appropriate key parameter. Other means would require either a class decorator or a custom metaclass to work.
So here is an example of what I wrote about:
from abc import abstractmethod, ABCMeta

class OrderedAbstractMethod(object):
    def __init__(self):
        self.counter = 0

    def __call__(self, func):
        func._method_order = self.counter
        self.counter += 1
        return abstractmethod(func)

ordered_abstract_method = OrderedAbstractMethod()

class Abstract(object):
    __metaclass__ = ABCMeta

    @ordered_abstract_method
    def first(self):
        pass

    @ordered_abstract_method
    def second(self):
        pass

    @ordered_abstract_method
    def third(self):
        pass

    @ordered_abstract_method
    def fourth(self):
        pass

print "Unordered methods: ", [method[0] for method in Abstract.__dict__.items() if not method[0].startswith("_")]
# here it printed out - ['second', 'third', 'fourth', 'first']

print "Ordered methods: ", sorted([method for method in Abstract.__dict__.items() if not method[0].startswith("_")], key=lambda m: m[1]._method_order)
I have class Base. I'd like to extend its functionality in a class Derived. I was planning to write:
class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        # ...

    def derived_method1(self):
        # ...
Sometimes I already have a Base instance, and I want to create a Derived instance based on it, i.e., a Derived instance that shares the Base object (doesn't re-create it from scratch). I thought I could write a static method to do that:
b = Base(arg1, arg2) # very large object, expensive to create or copy
d = Derived.from_base(b, derived_arg1, derived_arg2) # reuses existing b object
but it seems impossible. Either I'm missing a way to make this work, or (more likely) I'm missing a very big reason why it can't be allowed to work. Can someone explain which one it is?
[Of course, if I used composition rather than inheritance, this would all be easy to do. But I was hoping to avoid the delegation of all the Base methods to Derived through __getattr__.]
Rely on what your Base class does with base_arg1 and base_arg2:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2
    ...

class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        ...

    @classmethod
    def from_base(cls, b, da1, da2):
        return cls(b.base_arg1, b.base_arg2, da1, da2)
The alternative approach to Alexey's answer (my +1) is to pass the base object as the base_arg1 argument and to check whether it is an instance of the base class (i.e. whether the argument is being used to pass an existing base object). The other argument can be made technically optional (say, defaulting to None) and checked explicitly inside the code.
The difference is that the argument type alone decides which of the two possible ways of creation is used. This is necessary if the creation of the object cannot be explicitly captured in the source code (e.g. some structure contains a mix of argument tuples, some of them with initial values, some with references to existing objects). Then you would probably need to pass the arguments as keyword arguments:
d = Derived(b, derived_arg1=derived_arg1, derived_arg2=derived_arg2)
Updated: Sharing the internal structures with the initial object is possible with both approaches. However, you must be aware that if one of the objects modifies the shared data, the usual funny things can happen.
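A rough sketch of that dispatch-on-argument-type idea (assuming, as in the answer above, that Base stores its two constructor arguments as attributes):

class Derived(Base):
    def __init__(self, base_arg1, base_arg2=None,
                 derived_arg1=None, derived_arg2=None):
        if isinstance(base_arg1, Base):
            # called with an existing Base instance: reuse its data
            base_arg1, base_arg2 = base_arg1.base_arg1, base_arg1.base_arg2
        super().__init__(base_arg1, base_arg2)
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2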
To be clear here, I'll make an answer with code. pepr talks about this solution, but code is always clearer than English. In this case Base should not be subclassed, but it should be a member of Derived:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(object):
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def derived_method1(self):
        return self.base.base_arg1 * self.derived_arg1
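If the worry from the question is having to hand-delegate every Base method, a single __getattr__ on this composition-based Derived can forward the rest (a sketch, not part of the original answer):

class Derived(object):
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def __getattr__(self, name):
        # called only when normal lookup fails, so Derived's own
        # attributes and methods take precedence over Base's
        return getattr(self.base, name)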
I am trying to define a number of classes based on an abstract base class. Each of these classes basically defines a cell shape for a visualisation package. The cell is comprised of a number of vertices (points) and each subclass will require a different number of points. Each class can be thought of as a container for a fixed number of point coordinates.
As an example, consider the base class Shape, which is simply a container for a list of coordinates:
class Shape(object):
    """Cell shape base class."""

    def __init__(self, sequence):
        self.points = sequence

    @property
    def points(self):
        return self._points

    @points.setter
    def points(self, sequence):
        # Error checking goes here, e.g. check that `sequence` is a
        # sequence of numeric values.
        self._points = sequence
Ideally I want to be able to define, say, a Square class, where the points.setter method checks that sequence is of length four. Furthermore I would like a user to not be able to instantiate Shape. Is there a way I can define Shape to be an abstract base class? I have tried changing the definition of Shape to the following:
import abc

class Shape(object):
    """Cell shape base class."""
    __metaclass__ = abc.ABCMeta

    def __init__(self, sequence):
        self.points = sequence

    @abc.abstractproperty
    def npoints(self):
        pass

    @property
    def points(self):
        return self._points

    @points.setter
    def points(self, sequence):
        # Error checking goes here...
        if len(sequence) != self.npoints:
            raise TypeError('Some descriptive error message!')
        self._points = sequence
This requires subclasses to define the property npoints. I can then define a class Square as
class Square(Shape):
    @property
    def npoints(self):
        return 4
However, this would be rather tedious to implement for a large number of subclasses (and with more than one property to implement). I was hoping to define a class factory which would create my subclasses for me, something along the lines of:
def Factory(name, npoints):
    return type(name, (Shape,), dict(npoints=npoints))

Triangle = Factory('Triangle', 3)
Square = Factory('Square', 4)
# etc...
Is this class factory function a valid approach to take, or am I clobbering the npoints property? Is it better to replace the call to type with something more verbose like:
def Factory(name, _npoints):
    class cls(Shape):
        @property
        def npoints(self):
            return _npoints
    cls.__name__ = name
    return cls
An alternative approach would be to define a class attribute _NPOINTS and change the npoints property of Shape to

@property
def npoints(self):
    return self._NPOINTS
However, then I lose the benefit of using an abstract base class, since:
I can't see how to define a class attribute using type, and
I don't know how to define an abstract class attribute.
Does anyone have any thoughts on the best way to implement this abstract base class and class factory function, or even an altogether better design?
Without knowing more about your project, I cannot give specific advice on the general design. I will just provide a few more general hints and thoughts.
Dynamically generated classes are often a sign that you don't need separate classes at all - simply write a single class that incorporates all the functionality. What's the problem with a Shape class that gets its properties at instantiation time? (Of course there are reasons to use dynamically generated classes - the namedtuple() factory function is one example. I couldn't find any specific reasons in your question, however.)
Instead of using abstract base classes, you can often simply document the intended interface, and then write classes conforming to this interface. Due to the dynamic nature of Python, you don't strictly need a common base class. There are often other advantages to a common base class, however - for example, shared functionality.
Only check for application code errors if not doing so leads to strange errors in unrelated places. If, say, your function expects an iterable, simply assume you got an iterable. If the user passed in something else, your code will fail when it tries to iterate over the passed-in object anyway, and the error message will usually be enough for the application developer to understand the error.
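To illustrate the first point, a minimal sketch of a single Shape class configured at instantiation time (names taken from the question; validation kept in the setter):

class Shape(object):
    """Cell shape: a container for a fixed number of point coordinates."""

    def __init__(self, npoints, sequence):
        self.npoints = npoints
        self.points = sequence

    @property
    def points(self):
        return self._points

    @points.setter
    def points(self, sequence):
        # reject sequences of the wrong length for this shape
        if len(sequence) != self.npoints:
            raise ValueError('expected %d points, got %d'
                             % (self.npoints, len(sequence)))
        self._points = sequence

square = Shape(4, [(0, 0), (1, 0), (1, 1), (0, 1)])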
I have a number of atomic classes (components/mixins, not really sure what to call them) in a library I'm developing, which are meant to be subclassed by applications. This atomicity was chosen so that applications can use only the features they need, and combine the components through multiple inheritance.
However, sometimes this atomicity cannot be ensured, because some components may depend on others. For example, imagine I have a component that gives a graphical representation to an object, and another component which uses this graphical representation to perform collision checking. The first is purely atomic, but the latter requires that the current object already subclasses the graphical representation component, so that its methods are available to it. This is a problem, because we have to somehow tell the users of this library that, in order to use a certain component, they also have to subclass this other one. We could make the collision component subclass the visual component, but if the user also subclasses the visual component, it wouldn't work, because the classes would not be at the same level of the hierarchy (unlike a simple diamond relationship, which is desired), and this would give cryptic metaclass errors which are hard for the programmer to understand.
Therefore, I would like to know if there is any cool way, through maybe metaclass redefinition or class decorators, to mark these unatomic components so that, when they are subclassed, the additional dependency is injected into the current object if it's not yet available. Example:
class AtomicComponent(object):
    pass

@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent(object):
    pass

class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass

class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
Can someone give me a hint on how I can do this? Or whether it is even possible...
edit:
Since it is debatable whether the metaclass solution is the best one, I'll leave this unaccepted for 2 days.
Other solutions might be to improve the error messages - for example, making something like UserClass2 raise an error saying that UnAtomicComponent already extends this component. This, however, creates the problem that it is impossible to use two UnAtomicComponents, given that they would subclass object at different levels.
"Metaclasses"
This is what they are for! At time of class creation, the class parameters run through the
metaclass code, where you can check the bases and change then, for example.
This runs without error - though it does not preserve the order of needed classes
marked with the "depends" decorator:
class AutoSubclass(type):
    def __new__(metacls, name, bases, dct):
        new_bases = set()
        for base in bases:
            if hasattr(base, "_depends"):
                for dependence in base._depends:
                    if dependence not in bases:
                        new_bases.add(dependence)
        bases = bases + tuple(new_bases)
        return type.__new__(metacls, name, bases, dct)

__metaclass__ = AutoSubclass

def depends(*args):
    def decorator(cls):
        cls._depends = args
        return cls
    return decorator

class AtomicComponent:
    pass

@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent:
    pass

class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass

class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
(I removed the inheritance from "object", as I declared a global __metaclass__ variable. All classes will still be new-style classes and have this metaclass. Inheriting from object or another class overrides the global __metaclass__ variable, in which case a class-level __metaclass__ would have to be declared.)
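A quick way to check that the dependency really was injected (Python 2, matching the snippet above):

print UserClass.__mro__
# AtomicComponent appears among the bases even though UserClass
# only named UnAtomicComponent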
-- edit --
Without metaclasses, the way to go is to have your classes properly inherit from their dependencies. They will no longer be that "atomic", but since they could not work while being that atomic, it may not matter.
In the example below, classes C and D would be your user classes:
>>> class A(object): pass
...
>>> class B(A, object): pass
...
>>>
>>> class C(B): pass
...
>>> class D(B,A): pass
...
>>>