Multiple dynamic inheritance - Python

I'm having trouble getting multiple dynamic inheritance to work. These examples make the most sense to me (here and here), but one doesn't have enough code for me to really understand what's going on, and the other doesn't seem to work when I adapt it for my needs (code below).
I'm creating a universal tool that works with multiple software packages. In one software package I need to inherit from two classes: one software-specific API mixin and one PySide class. In another package I only need to inherit from the PySide class.
The least elegant solution I can think of is to just create two separate classes (with all the same methods) and call one or the other based on the software that's running. I have a feeling there's a better solution.
Here's what I'm working with:
## MainWindow.py
import os
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
# Build class
def build_main_window(*arg):
    class Build(arg):
        def __init__(self):
            super(Build, self).__init__()
        # ----- a bunch of methods

# Get software
software = os.getenv('SOFTWARE')

# Run tool
if software == 'maya':
    build_main_window(maya_mixin_class, QtGui.QWidget)
if software == 'houdini':
    build_main_window(QtGui.QWidget)
I'm currently getting this error:
# class Build(arg):
# TypeError: Error when calling the metaclass bases
# tuple() takes at most 1 argument (3 given) #
Thanks for any help!
EDIT:
## MainWindow.py
import os
# Build class
class BuildMixin(object):
    def __init__(self):
        super(BuildMixin, self).__init__()
    # ----- a bunch of methods

def build_main_window(*args):
    return type('Build', (BuildMixin, QtGui.QWidget) + args, {})

# Get software
software = os.getenv('SOFTWARE')

# Run tool
if software == 'maya':
    from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
    Build = build_main_window(MayaQWidgetDockableMixin)
if software == 'houdini':
    Build = build_main_window()

The error in your original code is caused by failing to use tuple expansion in the class definition. I would suggest simplifying your code to this:
# Get software
software = os.getenv('SOFTWARE')

BaseClasses = [QtGui.QWidget]
if software == 'maya':
    from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
    BaseClasses.insert(0, MayaQWidgetDockableMixin)

class Build(*BaseClasses):
    def __init__(self, parent=None):
        super(Build, self).__init__(parent)
UPDATE:
The above code will only work with Python 3, so it looks like a solution using type() will be needed for Python 2. From the other comments, it appears that the MayaQWidgetDockableMixin class may be an old-style class, so a solution like this may be necessary:
def BaseClass():
    bases = [QtGui.QWidget]
    if software == 'maya':
        from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
        class Mixin(MayaQWidgetDockableMixin, object): pass
        bases.insert(0, Mixin)
    return type('BuildBase', tuple(bases), {})

class Build(BaseClass()):
    def __init__(self, parent=None):
        super(Build, self).__init__(parent)

arg is a tuple; you can't use a tuple as a base class.
Use type() to create a new class instead; it takes a class name, a tuple of base classes (can be empty) and the class body (a dictionary).
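For illustration, here is a minimal, self-contained sketch of that three-argument type() call, using placeholder classes rather than the Qt/Maya ones:
class Base(object):
    def greet(self):
        return 'hello from Base'

# type(name, tuple_of_bases, class_body_dict)
Dynamic = type('Dynamic', (Base,), {'shout': lambda self: self.greet().upper()})

obj = Dynamic()
print(obj.greet())   # hello from Base
print(obj.shout())   # HELLO FROM BASE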
I'd keep the methods for your class in a mix-in class:
class BuildMixin(object):
    def __init__(self):
        super(BuildMixin, self).__init__()
    # ----- a bunch of methods

def build_main_window(*args):
    return type('Build', (BuildMixin, QtGui.QWidget) + args, {})

if software == 'maya':
    Build = build_main_window(maya_mixin_class)
if software == 'houdini':
    Build = build_main_window()
Here, args is used as an additional set of classes to inherit from. The BuildMixin class provides all the real methods, so the third argument to type() is left empty (the generated Build class has an empty class body).
Since QtGui.QWidget is common between the two classes, I just moved that into the type() call.

Related

Object Factory design to initialize parent or child class object

I am building a tool that takes directories as inputs and performs actions where necessary. These actions vary depending on certain variables, so I created a few classes which help me with my needs in an organised fashion.
However, I hit a wall figuring out how to best design the following scenario.
For the sake of simplicity, let's assume there are only directories (no files). Also, the below is a heavily simplified example.
I have the following parent class:
# directory.py
from pathlib import Path
class Directory:
    def __init__(self, absolute_path):
        self.path = Path(absolute_path)

    def content(self):
        return [Directory(c) for c in self.path.iterdir()]
So, I have a method in the parent class that returns Directory instances for each directory inside the initial directory in absolute_path
The above holds all the methods that can be performed on any directory. Now, I have a separate class that inherits from the above and adds further methods.
# special_directory.py
from directory import Directory
class SpecialDirectory(Directory):
    def __init__(self, absolute_path):
        super().__init__(absolute_path)

    # More methods
I am using an Object Factory like approach to build one or the other based on a condition like so:
# directory_factory.py
from directory import Directory
from special_directory import SpecialDirectory
def pick(path):
    return SpecialDirectory(path) if 'foo' in path else Directory(path)
So, if 'foo' exists in the path, it should be a SpecialDirectory instead allowing it to do everything Directory does plus more.
The problem I'm facing is with the content() method. Both classes should be able to use it, but I don't want it to be limited to producing a list of Directory instances. If any of the contents matches "foo*", that entry should be a SpecialDirectory.
Directory doesn't (and shouldn't) know about SpecialDirectory, so I tried importing and using the factory but it complains about some circular import (which makes sense).
I am not particularly stuck as I have come up with a temp fix, but it isn't pretty. So I was hoping I could get some tips as to what would be an effective and clean solution for this specific situation.
What you need is sometimes called a "virtual constructor" which is a way to allow subclasses to determine what type of class instance is created when calling the base class constructor. There's no such thing in Python (or C++ for that matter), but you can simulate them. Below is an example of a way of doing this.
Note this code is very similar to what's in my answer to the question titled Improper use of __new__ to generate classes? (which has more information about the technique). Also see the one to What exactly is a Class Factory?
from pathlib import Path

class Directory:
    subclasses = []

    @classmethod
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)

    def __init__(self, absolute_path):
        self.path = Path(absolute_path)

    def __new__(cls, path):
        """ Create instance of appropriate subclass. """
        for subclass in cls.subclasses:
            if subclass.pick(path):
                return object.__new__(subclass)
        else:
            return object.__new__(cls)  # Default is this base class.

    def content(self):
        return [Directory(c) for c in self.path.iterdir()]

    def __repr__(self):
        classname = type(self).__name__
        return f'{classname}(path={self.path!r})'

    # More methods
    ...

class SpecialDirectory(Directory):
    def __init__(self, absolute_path):
        super().__init__(absolute_path)

    @classmethod
    def pick(cls, path):
        return 'foo' in str(path)

    # More methods
    ...

if __name__ == '__main__':
    root = './_obj_factory_test'
    d = Directory(root)
    print(d.content())
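As a usage sketch (the paths below are hypothetical), the base class now dispatches to the right subclass by itself:
plain = Directory('/tmp/plain_dir')    # no 'foo' -> plain Directory
special = Directory('/tmp/foo_dir')    # 'foo' matches SpecialDirectory.pick()
print(plain)     # Directory(path=PosixPath('/tmp/plain_dir'))
print(special)   # SpecialDirectory(path=PosixPath('/tmp/foo_dir'))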

How to effectively use a base class

I think using a base class would be very helpful for a set of classes I am defining for an application. In the (possibly incorrect) example below, I outline what I'm going for: a base class containing an attribute that I won't want to define multiple times. In this case, the base class will define the base part of a file path that each child class will then use to build out their own more specific paths.
However, it seems like I'd have to type in parent_path to the __init__ method of the children classes anyway, regardless of the use of single inheritance from the base class.
import pathlib
class BaseObject:
    def __init__(self, parent_path: pathlib.Path):
        self.parent_path = parent_path

class ChildObject(BaseObject):
    def __init__(self, parent_path: pathlib.Path, child_path: pathlib.Path):
        super(ChildObject, self).__init__()
        self.full_path = parent_path.joinpath(child_path)

class ChildObject2(BaseObject):
    ...

class ChildObject3(BaseObject):
    ...
If this is the case, then is there any reason to use inheritance from a base class like this, other than to make it clearer what my implementation is trying to do?
I don't see an advantage for this implementation. As you've noted, you still have to pass the parent_path into the child instantiation. You also have to call the parent's __init__, which counteracts the one-line clarity "improvement".
To my eyes, you've already made it clear by using good attribute names. I'd switch from parent_path to base_path, so the reader doesn't go looking for a parent object.
Alternately, you might want to make that a class attribute of the parent: set it once, and let all the objects share it by direct reference, rather than passing in the same value for every instantiation.
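A minimal sketch of that class-attribute variant (the base path value here is made up):
import pathlib

class BaseObject:
    # shared by every instance; set once for the whole application
    base_path = pathlib.Path('/data/app')   # hypothetical location

class ChildObject(BaseObject):
    def __init__(self, child_path: pathlib.Path):
        self.full_path = self.base_path.joinpath(child_path)

co = ChildObject(pathlib.Path('reports'))
print(co.full_path)   # /data/app/reports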
Yes, it is correct that you have to pass parent_path into the __init__ call of the parent, that is super(ChildObject, self).__init__(parent_path) (you forgot to provide parent_path in your example).
However, this is Python, so there is usually a way to avoid writing boilerplate code. In this case, I would recommend the attrs library. With it you can even avoid writing your __init__ methods altogether.
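A rough sketch of what that could look like with attrs (assuming the attrs package is available; the __attrs_post_init__ hook is used here to build full_path):
import pathlib
import attr

@attr.s(auto_attribs=True)
class BaseObject:
    parent_path: pathlib.Path

@attr.s(auto_attribs=True)
class ChildObject(BaseObject):
    child_path: pathlib.Path

    def __attrs_post_init__(self):
        # plain instance attribute, computed after the generated __init__ runs
        self.full_path = self.parent_path.joinpath(self.child_path)

co = ChildObject(pathlib.Path('.'), pathlib.Path('sub'))
print(co.full_path)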
To get some use out of such an inheritance scheme, make your BaseObject more flexible so it accepts optional (keyword) arguments:
import pathlib

class BaseObject:
    def __init__(self, parent_path: pathlib.Path, child_path: pathlib.Path = None):
        self.parent_path = parent_path
        self.full_path = parent_path.joinpath(child_path) if child_path else parent_path

class ChildObject(BaseObject):
    ...

class ChildObject2(BaseObject):
    ...

class ChildObject3(BaseObject):
    ...

co = ChildObject(pathlib.Path('.'), pathlib.Path('../text_files'))
print(co, vars(co))
# <__main__.ChildObject object at 0x7f1a664b49b0> {'parent_path': PosixPath('.'), 'full_path': PosixPath('../text_files')}

Extending functionality of a Python library class which is part of a structure

I am working with the Python canmatrix library (well, presently my Python3 fork) which provides a set of classes for an in-memory description of CAN network messages as well as scripts for importing and exporting to and from on-disk representations (various standard CAN description file formats).
I am writing a PyQt application using the canmatrix library and would like to add some minor additional functionality to the bottom-level Signal class. Note that a CanMatrix organizes its member Frames, which in turn organize its member Signals. The whole structure is created by an import script which reads a file. I would like to retain the import script and sub-member finder functions of each layer, but add an extra 'value' member to the Signal class as well as getters/setters that can trigger Qt signals (not related to the canmatrix Signal objects).
It seems that standard inheritance approaches would require me to subclass every class in the library and override every function which creates the library Signal to use mine instead. Ditto for the import functions. This just seems horribly excessive to add non-intrusive functionality to a library.
I have tried inheriting and replacing the library class with my inherited one (with and without the pass-through constructor) but the import still creates library classes, not mine. I forget if I copied this from this other answer or not, but it's the same structure as referenced there.
class Signal(QObject, canmatrix.Signal):
    _my_signal = pyqtSignal(int)

    def __init__(self, *args, **kwargs):
        canmatrix.Signal.__init__(self, *args, **kwargs)
        # TODO: what about QObject
        print('boo')

    def connect(self, target):
        self._my_signal.connect(target)

    def set_value(self, value):
        self._my_value = value
        self._my_signal.emit(value)

canmatrix.Signal = Signal
print('overwritten')
Is there a direct error in my attempt here?
Am I doing this all wrong and need to go find some (other) design pattern?
My next attempt involved shadowing each instance of the library class. For any instance of the library class that I want to add the functionality to, I construct one of my objects, which associates itself with the library-class object. Then, with an extra layer, I can get from either object to the other.
class Signal(QObject):
    _my_signal = pyqtSignal(int)

    def __init__(self, signal):
        signal.signal = self
        self.signal = signal
        # TODO: what about QObject parameters
        QObject.__init__(self)
        self.value = None

    def connect(self, target):
        self._my_signal.connect(target)

    def set_value(self, value):
        self.value = value
        self._my_signal.emit(value)
The extra layer is annoying (library_signal.signal.set_value() rather than library_signal.set_value()) and the mutual references seem like they may keep both objects from ever getting cleaned up.
This does run and function, but I suspect there's still a better way.

Plugin architecture - Plugin Manager vs inspecting from plugins import *

I'm currently writing an application which allows the user to extend it via a 'plugin' type architecture. They can write additional python classes based on a BaseClass object I provide, and these are loaded against various application signals. The exact number and names of the classes loaded as plugins is unknown before the application is started, but are only loaded once at startup.
During my research into the best way to tackle this I've come up with two common solutions.
Option 1 - Roll your own using imp, pkgutil, etc.
See for instance, this answer or this one.
Option 2 - Use a plugin manager library
Randomly picking a couple
straight.plugin
yapsy
this approach
My question is - on the proviso that the application must be restarted in order to load new plugins - is there any benefit of the above methods over something inspired by this SO answer and this one, such as:
import inspect
import sys

import my_plugins

def predicate(c):
    # filter to classes
    return inspect.isclass(c)

def load_plugins():
    for name, obj in inspect.getmembers(sys.modules['my_plugins'], predicate):
        obj.register_signals()
Are there any disadvantages to this approach compared to the ones above (other than that all the plugins must be in the same file)? Thanks!
EDIT
Comments requested further information... the only additional thing I can think to add is that the plugins use the blinker library to provide signals that they subscribe to. Each plugin may subscribe to different signals of different types and hence must have its own specific "register" method.
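For context, here is a rough sketch of what such a per-plugin register method might look like with blinker (the signal name, handler, and BaseClass stub are made up for the example):
from blinker import signal

class BaseClass(object):
    """Stand-in for the question's plugin base class."""
    pass

class MyPlugin(BaseClass):
    def register_signals(self):
        # hypothetical signal name; each plugin subscribes to what it needs
        signal('document-saved').connect(self.on_document_saved)

    def on_document_saved(self, sender, **kwargs):
        print('plugin reacting to a save from', sender)

plugin = MyPlugin()
plugin.register_signals()
signal('document-saved').send('editor')   # calls on_document_saved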
Since Python 3.6, a new class method __init_subclass__ has been added; it is called on the base class whenever a new subclass is created.
This method can further simplify the solution offered by will-hart above, by removing the metaclass.
The __init_subclass__ method was introduced with PEP 487: Simpler customization of class creation. The PEP comes with a minimal example for a plugin architecture:
It is now possible to customize subclass creation without using a
metaclass. The new __init_subclass__ classmethod will be called on
the base class whenever a new subclass is created:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)

class Plugin1(PluginBase):
    pass

class Plugin2(PluginBase):
    pass
The PEP example above stores references to the classes in the PluginBase.subclasses field.
If you want to store instances of the plugin classes, you can use a structure like this:
class Plugin:
    """Base class for all plugins. Singleton instances of subclasses are
    created automatically and stored in the Plugin.plugins class field."""
    plugins = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.plugins.append(cls())

class MyPlugin1(Plugin):
    def __init__(self):
        print("MyPlugin1 instance created")

    def do_work(self):
        print("Do something")

class MyPlugin2(Plugin):
    def __init__(self):
        print("MyPlugin2 instance created")

    def do_work(self):
        print("Do something else")

for plugin in Plugin.plugins:
    plugin.do_work()
which outputs:
MyPlugin1 instance created
MyPlugin2 instance created
Do something
Do something else
The metaclass approach is useful for this issue in Python < 3.6 (see @quasoft's answer for Python 3.6+). It is very simple and acts automatically on any imported module. In addition, complex logic can be applied to plugin registration with very little effort. The metaclass approach works like the following:
1) A custom PluginMount metaclass is defined which maintains a list of all plugins
2) A Plugin class is defined which sets PluginMount as its metaclass
3) When a class deriving from Plugin - for instance MyPlugin - is imported, it triggers the __init__ method on the metaclass. This registers the plugin and performs any application-specific logic and event subscription.
Alternatively, if you put the PluginMount.__init__ logic in PluginMount.__new__, it is called whenever a new Plugin derived class is created.
class PluginMount(type):
    """
    A plugin mount point derived from:
        http://martyalchin.com/2008/jan/10/simple-plugin-framework/
    Acts as a metaclass which creates anything inheriting from Plugin
    """
    def __init__(cls, name, bases, attrs):
        """Called when a Plugin derived class is imported"""
        if not hasattr(cls, 'plugins'):
            # Called when the metaclass is first instantiated
            cls.plugins = []
        else:
            # Called when a plugin class is imported
            cls.register_plugin(cls)

    def register_plugin(cls, plugin):
        """Add the plugin to the plugin list and perform any registration logic"""
        # create a plugin instance and store it
        # optionally you could just store the plugin class and lazily instantiate
        instance = plugin()

        # save the plugin reference
        cls.plugins.append(instance)

        # apply plugin logic - in this case connect the plugin to blinker signals
        # this must be defined in the derived class
        instance.register_signals()
Then a base plugin class which looks like:
class Plugin(object):
    """A plugin which must provide a register_signals() method"""
    __metaclass__ = PluginMount
Finally, an actual plugin class would look like the following:
class MyPlugin(Plugin):
    def register_signals(self):
        print "Class created and registering signals"

    def other_plugin_stuff(self):
        print "I can do other plugin stuff"
Plugins can be accessed from any python module that has imported Plugin:
for plugin in Plugin.plugins:
    plugin.other_plugin_stuff()
See the full working example
The approach from will-hart was the most useful one to me!
Since I needed more control, I wrapped the Plugin base class in a function like:
def get_plugin_base(name='Plugin',
                    cls=object,
                    metaclass=PluginMount):

    def iter_func(self):
        for mod in self._models:
            yield mod

    bases = not isinstance(cls, tuple) and (cls,) or cls

    class_dict = dict(
        _models=None,
        session=None
    )
    class_dict['__iter__'] = iter_func

    return metaclass(name, bases, class_dict)
and then:
from plugin import get_plugin_base
Plugin = get_plugin_base()
This allows adding additional base classes or switching to another metaclass.
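For example (assuming get_plugin_base lives in a plugin module as above; SessionMixin is a made-up extra base class):
from plugin import get_plugin_base   # module layout assumed here

# default: object as the only base class, PluginMount as the metaclass
Plugin = get_plugin_base()

class SessionMixin(object):
    """Hypothetical extra base class to mix in."""
    pass

# a plugin base that also inherits from SessionMixin
SessionPlugin = get_plugin_base(name='SessionPlugin', cls=(SessionMixin, object))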

Printing all instances of a class

With a class in Python, how do I define a function to print every single instance of the class in a format defined in the function?
I see two options in this case:
Garbage collector
import gc

for obj in gc.get_objects():
    if isinstance(obj, some_class):
        do_something(obj)
This has the disadvantage of being very slow when you have a lot of objects, but works with types over which you have no control.
Use a mixin and weakrefs
from collections import defaultdict
import weakref

class KeepRefs(object):
    __refs__ = defaultdict(list)

    def __init__(self):
        self.__refs__[self.__class__].append(weakref.ref(self))

    @classmethod
    def get_instances(cls):
        for inst_ref in cls.__refs__[cls]:
            inst = inst_ref()
            if inst is not None:
                yield inst

class X(KeepRefs):
    def __init__(self, name):
        super(X, self).__init__()
        self.name = name

x = X("x")
y = X("y")

for r in X.get_instances():
    print r.name

del y

for r in X.get_instances():
    print r.name
In this case, all the references get stored as a weak reference in a list. If you create and delete a lot of instances frequently, you should clean up the list of weakrefs after iteration, otherwise there's going to be a lot of cruft.
Another problem in this case is that you have to make sure to call the base class constructor. You could also override __new__, but only the __new__ method of the first base class is used on instantiation. This also works only on types that are under your control.
Edit: The method for printing all instances according to a specific format is left as an exercise, but it's basically just a variation on the for-loops.
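For example, building on the X/KeepRefs classes above, one such formatting loop could be:
# one possible formatted dump of every live X instance
for inst in X.get_instances():
    print('<X name={!r}>'.format(inst.name))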
You'll want to create a static list on your class, and add a weakref to each instance so the garbage collector can clean up your instances when they're no longer needed.
import weakref

class A:
    instances = []

    def __init__(self, name=None):
        self.__class__.instances.append(weakref.proxy(self))
        self.name = name

a1 = A('a1')
a2 = A('a2')
a3 = A('a3')
a4 = A('a4')

for instance in A.instances:
    print(instance.name)
You don't need to import ANYTHING! Just use "self". Here's how you do this
class A:
    instances = []

    def __init__(self):
        self.__class__.instances.append(self)

print('\n'.join(str(i) for i in A.instances))  # this line was suggested by @anvelascos
It's this simple. No modules or libraries imported
Very nice and useful code, but it has a big problem: the list keeps growing and is never cleaned up. To test it, just add print(len(cls.__refs__[cls])) at the end of the get_instances method.
Here is a fix for the get_instances method:
__refs__ = defaultdict(list)

@classmethod
def get_instances(cls):
    refs = []
    for ref in cls.__refs__[cls]:
        instance = ref()
        if instance is not None:
            refs.append(ref)
            yield instance
    # print(len(refs))
    cls.__refs__[cls] = refs
or alternatively it could be done using WeakSet:
from weakref import WeakSet

__refs__ = defaultdict(WeakSet)

@classmethod
def get_instances(cls):
    return cls.__refs__[cls]
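Put together, a complete KeepRefs variant based on WeakSet might look like this sketch; note that with a WeakSet you add the instance itself rather than a weakref.ref:
from collections import defaultdict
from weakref import WeakSet

class KeepRefs(object):
    __refs__ = defaultdict(WeakSet)

    def __init__(self):
        # the WeakSet holds the instance weakly, so dead entries vanish on their own
        self.__refs__[self.__class__].add(self)

    @classmethod
    def get_instances(cls):
        return cls.__refs__[cls]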
As in almost all other OO languages, keep all instances of the class in a collection of some kind.
You can try this kind of thing.
class MyClassFactory( object ):
    theWholeList = []

    def __call__( self, *args, **kw ):
        x = MyClass( *args, **kw )
        self.theWholeList.append( x )
        return x
Now you can do this.
object= MyClassFactory( args, ... )
print MyClassFactory.theWholeList
Python doesn't have an equivalent to Smalltalk's #allInstances, as the architecture doesn't have this type of central object table (although modern Smalltalks don't really work like that either).
As the other poster says, you have to explicitly manage a collection. His suggestion of a factory method that maintains a registry is a perfectly reasonable way to do it. You may wish to do something with weak references so you don't have to explicitly keep track of object disposal.
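A sketch of that suggestion, combining the factory/registry idea with weak references (the class names are placeholders):
import weakref

class Widget(object):
    """Placeholder for the class whose instances should be tracked."""
    def __init__(self, name):
        self.name = name

class WidgetFactory(object):
    # a WeakSet drops entries automatically once instances are garbage collected
    registry = weakref.WeakSet()

    def __call__(self, *args, **kwargs):
        obj = Widget(*args, **kwargs)
        self.registry.add(obj)
        return obj

make_widget = WidgetFactory()
a = make_widget('a')
b = make_widget('b')
for w in WidgetFactory.registry:
    print(w.name)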
It's not clear if you need to print all class instances at once or when they're initialized, nor if you're talking about a class you have control over vs a class in a 3rd party library.
In any case, I would solve this by writing a class factory using Python metaclass support. If you don't have control over the class, manually update the __metaclass__ for the class or module you're tracking.
See http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html for more information.
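As a rough illustration of the metaclass route (Python 3 syntax, names made up; not tied to any particular third-party class):
import weakref

class InstanceTracker(type):
    """Metaclass that records every instance of the classes that use it."""
    def __init__(cls, name, bases, attrs):
        super(InstanceTracker, cls).__init__(name, bases, attrs)
        cls._instances = weakref.WeakSet()

    def __call__(cls, *args, **kwargs):
        instance = super(InstanceTracker, cls).__call__(*args, **kwargs)
        cls._instances.add(instance)
        return instance

class Tracked(metaclass=InstanceTracker):
    def __init__(self, name):
        self.name = name

t1, t2 = Tracked('t1'), Tracked('t2')
for obj in Tracked._instances:
    print(obj.name)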
In my project, I faced a similar problem and found a simple solution that may also work for you for listing and printing your class instances. The solution worked smoothly in Python 3.7; it gave partial errors in Python 3.5.
I will copy-paste the relevant code blocks from my recent project.
instances = []

class WorkCalendar:
    def __init__(self, day, patient, worker):
        self.day = day
        self.patient = patient
        self.worker = worker

    def __str__(self):
        return f'{self.day} : {self.patient} : {self.worker}'
In Python, the __str__ method determines how the object is represented in its string form. I added the : separators between the curly brackets; they are purely my preference, for a "Pandas DataFrame" kind of reading. If you define this small __str__ method, you will not see the machine-oriented default object descriptions, which make no sense to human eyes. After adding this __str__ method you can append your objects to your list and print them as you wish:
appointment = WorkCalendar("01.10.2020", "Jane", "John")
instances.append(appointment)
For printing, your format from __str__ will be used by default. But it is also possible to access the attributes separately:
for instance in instances:
    print(instance)
    print(instance.worker)
    print(instance.patient)
For detailed reading, you may look at the source: https://dbader.org/blog/python-repr-vs-str
