I'm currently writing an application which allows the user to extend it via a 'plugin' type architecture. They can write additional Python classes based on a BaseClass object I provide, and these are loaded against various application signals. The exact number and names of the classes loaded as plugins are unknown before the application is started, but they are only loaded once at startup.
During my research into the best way to tackle this I've come up with two common solutions.
Option 1 - Roll your own using imp, pkgutil, etc.
See for instance, this answer or this one.
Option 2 - Use a plugin manager library
Randomly picking a couple:
straight.plugin
yapsy
this approach
My question is: given that the application must be restarted in order to load new plugins, is there any benefit to the above methods over something inspired by this SO answer and this one, such as:
import inspect
import sys
import my_plugins

def predicate(c):
    # filter to classes
    return inspect.isclass(c)

def load_plugins():
    for name, obj in inspect.getmembers(sys.modules['my_plugins'], predicate):
        obj.register_signals()
Are there any disadvantages to this approach compared to the ones above, other than that all the plugins must be in the same file? Thanks!
EDIT
Comments request further information... the only additional thing I can think to add is that the plugins use the blinker library to provide signals that they subscribe to. Each plugin may subscribe to different signals of different types and hence must have its own specific "register" method.
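For comparison, here is a self-contained sketch of the inspect-based approach. BasePlugin, PluginA/PluginB and the register_signals body are made-up stand-ins, and globals() replaces sys.modules['my_plugins'] purely so the example runs on its own:

```python
import inspect

class BasePlugin:
    registered = []

    def register_signals(self):
        # a real plugin would subscribe to blinker signals here
        BasePlugin.registered.append(type(self).__name__)

class PluginA(BasePlugin):
    pass

class PluginB(BasePlugin):
    pass

def predicate(c):
    # filter to concrete subclasses of BasePlugin
    return inspect.isclass(c) and issubclass(c, BasePlugin) and c is not BasePlugin

def load_plugins(namespace):
    # the question iterates sys.modules['my_plugins']; globals() keeps this self-contained
    for name, obj in sorted(namespace.items()):
        if predicate(obj):
            obj().register_signals()

load_plugins(globals())
print(BasePlugin.registered)  # → ['PluginA', 'PluginB']
```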
Since Python 3.6 there is a new classmethod hook, __init_subclass__, which is called on a base class whenever a new subclass is created.
This method can further simplify the solution offered by will-hart above, by removing the metaclass.
The __init_subclass__ method was introduced with PEP 487: Simpler customization of class creation. The PEP comes with a minimal example for a plugin architecture:
It is now possible to customize subclass creation without using a
metaclass. The new __init_subclass__ classmethod will be called on
the base class whenever a new subclass is created:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)

class Plugin1(PluginBase):
    pass

class Plugin2(PluginBase):
    pass
The PEP example above stores references to the classes in the PluginBase.subclasses field.
If you want to store instances of the plugin classes, you can use a structure like this:
class Plugin:
    """Base class for all plugins. Singleton instances of subclasses are created
    automatically and stored in the Plugin.plugins class field."""
    plugins = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.plugins.append(cls())

class MyPlugin1(Plugin):
    def __init__(self):
        print("MyPlugin1 instance created")

    def do_work(self):
        print("Do something")

class MyPlugin2(Plugin):
    def __init__(self):
        print("MyPlugin2 instance created")

    def do_work(self):
        print("Do something else")

for plugin in Plugin.plugins:
    plugin.do_work()
which outputs:
MyPlugin1 instance created
MyPlugin2 instance created
Do something
Do something else
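Since the OP's plugins must each subscribe to their own blinker signals, the same __init_subclass__ hook can call a per-plugin register method at subclass-creation time. A minimal sketch (EchoPlugin and the connected flag are made up; a real register_signals would call blinker's signal(...).connect):

```python
class Plugin:
    """Plugins self-register on subclassing; each must provide register_signals()."""
    plugins = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        instance = cls()
        instance.register_signals()   # let each plugin wire up its own signals
        Plugin.plugins.append(instance)

class EchoPlugin(Plugin):
    def register_signals(self):
        # a real plugin would call blinker's signal(...).connect(...) here
        self.connected = True

print(len(Plugin.plugins), Plugin.plugins[0].connected)  # → 1 True
```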
The metaclass approach is useful for this issue in Python < 3.6 (see @quasoft's answer for Python 3.6+). It is very simple, acts automatically on any imported module, and lets you apply complex logic to plugin registration with very little effort.
The metaclass approach works like the following:
1) A custom PluginMount metaclass is defined which maintains a list of all plugins
2) A Plugin class is defined which sets PluginMount as its metaclass
3) When a class deriving from Plugin (for instance MyPlugin) is imported, it triggers the __init__ method on the metaclass. This registers the plugin and performs any application-specific logic and event subscription.
Alternatively, if you put the registration logic in the metaclass's __call__ method instead, it runs whenever a new instance of a Plugin-derived class is created.
class PluginMount(type):
    """
    A plugin mount point derived from:
    http://martyalchin.com/2008/jan/10/simple-plugin-framework/

    Acts as a metaclass which creates anything inheriting from Plugin
    """

    def __init__(cls, name, bases, attrs):
        """Called when a Plugin derived class is imported"""
        if not hasattr(cls, 'plugins'):
            # Called when the metaclass is first instantiated
            cls.plugins = []
        else:
            # Called when a plugin class is imported
            cls.register_plugin(cls)

    def register_plugin(cls, plugin):
        """Add the plugin to the plugin list and perform any registration logic"""
        # create a plugin instance and store it
        # optionally you could just store the plugin class and lazily instantiate
        instance = plugin()

        # save the plugin reference
        cls.plugins.append(instance)

        # apply plugin logic - in this case connect the plugin to blinker signals
        # this must be defined in the derived class
        instance.register_signals()
Then a base plugin class which looks like (Python 2 syntax; in Python 3 you would write class Plugin(metaclass=PluginMount): instead):

class Plugin(object):
    """A plugin which must provide a register_signals() method"""
    __metaclass__ = PluginMount
Finally, an actual plugin class would look like the following:

class MyPlugin(Plugin):
    def register_signals(self):
        print("Class created and registering signals")

    def other_plugin_stuff(self):
        print("I can do other plugin stuff")
Plugins can be accessed from any Python module that has imported Plugin:

for plugin in Plugin.plugins:
    plugin.other_plugin_stuff()
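For readers on Python 3, a minimal self-contained rendering of the same pattern with the metaclass= keyword might look like this (register_signals is omitted here for brevity):

```python
class PluginMount(type):
    """Metaclass that registers every subclass of Plugin as it is defined."""
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        if not hasattr(cls, 'plugins'):
            cls.plugins = []           # first pass: the Plugin base class itself
        else:
            cls.plugins.append(cls())  # later passes: concrete plugin classes

class Plugin(metaclass=PluginMount):
    pass

class MyPlugin(Plugin):
    def other_plugin_stuff(self):
        return "I can do other plugin stuff"

print([type(p).__name__ for p in Plugin.plugins])  # → ['MyPlugin']
```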
See the full working example
The approach from will-hart was the most useful one to me!
Since I needed more control, I wrapped the plugin base class in a factory function:
def get_plugin_base(name='Plugin',
                    cls=object,
                    metaclass=PluginMount):

    def iter_func(self):
        for mod in self._models:
            yield mod

    bases = not isinstance(cls, tuple) and (cls,) or cls

    class_dict = dict(
        _models=None,
        session=None
    )

    class_dict['__iter__'] = iter_func

    return metaclass(name, bases, class_dict)
and then:
from plugin import get_plugin_base
Plugin = get_plugin_base()
This makes it possible to add additional base classes or to switch to another metaclass.
Related
I'm currently working on a CLI abstraction layer, which abstracts CLI programs as classes in Python. Such a class offers a structured way to enable and configure CLI parameters; it helps catch faulty inputs and generates properly escaped arguments (e.g. adding double quotes).
Note: The following example uses Git, while in my target application it will be commercial tools that don't offer any Python API or similar.
Basic Ideas:
An abstraction of tool Git declares a Git class, which derives from class Program.
This parent class implements common methods to all programs.
CLI options are represented as nested class definitions on the Git class.
Nested classes are marked with a class-based decorator CLIOption derived from Attribute
(see https://github.com/pyTooling/pyAttributes for more details)
CLI options can be enabled / modified via indexed syntax.
An instance of Git is used to enable / configure CLI parameters and helps to assemble a list of correctly encoded strings that can be used e.g. in subprocess.Popen(...).
tool = Git()
tool[tool.FlagVersion] = True
print(tool.ToArgumentList())
Some Python Code:
from typing import ClassVar, Dict, Optional, Type

from pyAttributes import Attribute

class CLIOption(Attribute): ...  # see pyAttributes for more details

class Argument:
    _name: ClassVar[str]

    def __init_subclass__(cls, *args, name: str = "", **kwargs):
        super().__init_subclass__(*args, **kwargs)
        cls._name = name

class FlagArgument(Argument): ...
class CommandArgument(Argument): ...

class Program:
    __cliOptions__: Dict[Type[Argument], Optional[Argument]]

    def __init_subclass__(cls, *args, **kwargs):
        """Hook into subclass creation to register all marked CLI options in ``__cliOptions__``."""
        super().__init_subclass__(*args, **kwargs)

        # get all marked options and
        cls.__cliOptions__ = {}
        for option in CLIOption.GetClasses():
            cls.__cliOptions__[option] = None

class Git(Program):
    @CLIOption()
    class FlagVersion(FlagArgument, name="--version"): ...

    @CLIOption()
    class FlagHelp(FlagArgument, name="--help"): ...

    @CLIOption()
    class CmdCommit(CommandArgument, name="commit"): ...
Observations:
As @martineau pointed out in a comment, the CLIOption decorator has no access to the outer scope, so the scope can't be annotated onto the nested classes.
The nested classes are used because of some nice effects in Python not demonstrated here, and also to keep their scope local to a program. Imagine there might be multiple programs offering a FlagVersion flag, some as -v, others as --version.
Primary Questions:
How can I check if class FlagVersion is a nested class of class Git?
What I investigated so far:
There is no helper function for this goal comparable to what isinstance(...) or issubclass(...) offer.
While root-level classes have a __module__ reference to the outer scope, nested classes have no "pointer" to the next outer scope.
Actually, nested classes have the same __module__ values.
Which makes sense.
A class' __qualname__ includes the names of parent classes.
Unfortunately this is a string like Git.FlagVersion
So I see a possible "ugly" solution using __qualname__ and string operations to check if a class is nested and if it's nested in a certain outer scope.
Algorithm:
Assemble fully qualified name from __module__ and __qualname__.
Check element by element from left to right for matches.
This gets even more complicated if one nested class is defined in a parent class and another nested class is defined in a derived class. Then I also need to look into MRO ... oOo
Secondary Questions:
Is there a better way than using string operations?
Shouldn't Pythons data model offer a better way to get this information?
English is not my first language, so I'm working from a translation: as I understand it, you need a way to get the nested classes decorated with CLIOption from a subclass of Program (here, Git). If so, the following approach may help you.
I read some of the pyAttributes code.
from pyAttributes import Attribute

class Program(object):
    __cliOptions__: Dict[Type[Argument], Optional[Argument]]

    def __init_subclass__(cls, *args, **kwargs):
        cls.__cliOptions__ = {}
        for obj in cls.__dict__.values():
            if hasattr(obj, Attribute.__AttributesMemberName__):
                print(obj)
        # for option in CLIOption.GetClasses():
        #     cls.__cliOptions__[option] = None

class Git(Program):
    a = 1
    b = 2

    @CLIOption()
    class FlagVersion(FlagArgument, name="--version"):
        ...

    @CLIOption()
    class FlagHelp(FlagArgument, name="--help"):
        ...
Of course, the above can't work directly. Later I found that there was an error in the Attribute._AppendAttribute method, which I modified as follows:
class CLIOption(Attribute):
    ...  # see pyAttributes for more details

    @staticmethod
    def _AppendAttribute(func: Callable, attribute: 'Attribute') -> None:
        # inherit attributes and prepend myself or create a new attributes list
        if Attribute.__AttributesMemberName__ in func.__dict__:
            func.__dict__[Attribute.__AttributesMemberName__].insert(0, attribute)
        else:
            # The original func.__setattr__(Attribute.__AttributesMemberName__, [attribute]) has an error,
            # because __setattr__ of class FlagVersion is object.__setattr__
            setattr(func, Attribute.__AttributesMemberName__, [attribute])
            # or object.__setattr__(func, Attribute.__AttributesMemberName__, [attribute])
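To see why setattr(...) works here where func.__setattr__(...) does not: setattr on a class routes through type.__setattr__, whereas looking up __setattr__ on the class itself resolves to object.__setattr__, which only applies to instances. A toy sketch (Marker and Victim are hypothetical stand-ins for pyAttributes' bookkeeping, not its real API):

```python
class Marker:
    """Hypothetical stand-in for pyAttributes' attribute bookkeeping."""
    member_name = 'marked_attributes'

    @classmethod
    def append(cls, target, attribute):
        if cls.member_name in target.__dict__:
            # the stored list itself is mutable, even through the class __dict__ proxy
            target.__dict__[cls.member_name].insert(0, attribute)
        else:
            # setattr routes through type.__setattr__ when target is a class,
            # whereas target.__setattr__(...) would resolve to object.__setattr__
            setattr(target, cls.member_name, [attribute])

class Victim:
    pass

Marker.append(Victim, 'first')
Marker.append(Victim, 'second')
print(Victim.marked_attributes)  # → ['second', 'first']
```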
Following the proposed approach of iterating __dict__ works quite well.
So this was the first solution developed based on the given ideas:
def isnestedclass(cls: Type, scope: Type) -> bool:
    for memberName in scope.__dict__:
        member = getattr(scope, memberName)
        if type(member) is type:
            if cls is member:
                return True
    return False
That solution doesn't work for members inherited from parent classes.
So I extended it with searching through the inheritance graph via mro().
This is my current and final solution for an isnestedclass helper function.
def isnestedclass(cls: Type, scope: Type) -> bool:
    for mroClass in scope.mro():
        for memberName in mroClass.__dict__:
            member = getattr(mroClass, memberName)
            if type(member) is type:
                if cls is member:
                    return True
    return False
The function is available within the pyTooling package.
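A quick self-contained check of the helper, with toy classes standing in for Git and its nested flags (Base and Outer are made up for this demonstration):

```python
from typing import Type

def isnestedclass(cls: Type, scope: Type) -> bool:
    # walk the MRO so members inherited from parent classes are found too
    for mroClass in scope.mro():
        for memberName in mroClass.__dict__:
            member = getattr(mroClass, memberName)
            if type(member) is type and cls is member:
                return True
    return False

class Base:
    class InheritedNested: ...

class Outer(Base):
    class Nested: ...

print(isnestedclass(Outer.Nested, Outer))          # → True
print(isnestedclass(Base.InheritedNested, Outer))  # → True, found via the MRO
print(isnestedclass(int, Outer))                   # → False
```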
I have a base contract class which can be inherited to provide plugin functionality. I'm adding the new plugins using setuptools entry points, something like:
entry_points="""
[plugins]
plugin1=plugins.plugin1:Plugin1
"""
And classes look like...
import abc

class Plugin:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def must_override_method(self):
        pass

    @abc.abstractmethod
    def must_override_method2(self):
        pass

# ./plugins/plugin1.py
# Actually the plugins could be anywhere
class Plugin1(Plugin):
    def must_override_method(self):
        print("Hello")
Although @abstractmethod doesn't let me instantiate the class at runtime if the must_override methods are not defined, how should I go about adding unit tests for the plugins that are not yet written?
Is there a simple way to write a generic test that catches "plugins" that don't implement the abstract methods?
I think the best way is to use mocking for that abstract class. Mocking is a mechanism that doesn't really create an object; rather, it creates a mock object with the same properties. Please use the mock module for this.
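As a complement to mocking, a generic test can also lean on ABCMeta's own bookkeeping: any class that still has unimplemented abstract methods lists them in its __abstractmethods__ attribute. A sketch under the assumption that the plugin classes are discoverable as a list (GoodPlugin/BadPlugin are made-up examples):

```python
import abc

class Plugin(abc.ABC):
    @abc.abstractmethod
    def must_override_method(self): ...

    @abc.abstractmethod
    def must_override_method2(self): ...

class GoodPlugin(Plugin):
    def must_override_method(self):
        return "Hello"

    def must_override_method2(self):
        return "World"

class BadPlugin(Plugin):
    def must_override_method(self):
        return "Hello"
    # must_override_method2 is missing

def incomplete_plugins(plugin_classes):
    """Return the classes that still carry unimplemented abstract methods."""
    return [cls for cls in plugin_classes
            if getattr(cls, '__abstractmethods__', frozenset())]

print([c.__name__ for c in incomplete_plugins([GoodPlugin, BadPlugin])])  # → ['BadPlugin']
```

A unit test could then simply assert that incomplete_plugins(...) over all discovered plugin classes is empty.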
I have a Linux-based Python application which makes use of PyGTK and GTK.
It has both a UI execution mode and a command-line execution mode.
In UI mode, the class definition for the main application window is:
class ToolWindow(common.Singleton, gtk.Window):
    def __init__(self):
        gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)
What I want is: if the application is able to import gtk and pygtk, then ToolWindow should inherit from both common.Singleton and gtk.Window; otherwise it should inherit only from common.Singleton.
What is the best way to do it?
You can specify a metaclass where you can test what modules are importable:
class Meta(type):
    def __new__(cls, name, bases, attrs):
        try:
            import gtk
            bases += (gtk.Window,)  # note the trailing comma: a one-element tuple
        except ImportError:
            # gtk module not available
            pass
        # Create the class with the new bases tuple
        return super(Meta, cls).__new__(cls, name, bases, attrs)

class ToolWindow(common.Singleton):
    __metaclass__ = Meta
    ...
This is just a raw sketch, obviously many improvements can be done, but it should help you get started.
You should also be aware that you will need to change ToolWindow's __init__() method, because the gtk module may not be available. For example, set a flag in the metaclass and check it later, or even redefine __init__() from within the metaclass depending on whether the module is available; there are several ways of tackling this.
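To make the sketch above concrete and runnable without gtk installed, here is a hedged Python 3 rendering with the optional import stubbed out (Singleton and the has_gui flag are placeholders for common.Singleton and whatever flag you choose):

```python
OPTIONAL_MODULE = 'gtk'  # not importable here, so the fallback branch runs

class Singleton:
    """Stand-in for common.Singleton."""

class Meta(type):
    def __new__(cls, name, bases, attrs):
        try:
            mod = __import__(OPTIONAL_MODULE)
            bases += (mod.Window,)       # note the comma: a one-element tuple
            attrs['has_gui'] = True
        except (ImportError, AttributeError):
            attrs['has_gui'] = False     # flag that __init__ can branch on
        return super().__new__(cls, name, bases, attrs)

class ToolWindow(Singleton, metaclass=Meta):
    def __init__(self):
        if self.has_gui:
            # gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL) would go here
            pass

print(ToolWindow().has_gui)  # → False in an environment without gtk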
Setup: Python 3.3
I have a base class, called SourceBase, which defines abstract methods and values:
import abc

class SourceBase(object):
    __metaclass__ = abc.ABCMeta

    pluginid = 'undefined'  # OVERRIDE THIS IN YOUR SUBCLASS. If you don't, the program will ignore your plugin.

    @abc.abstractmethod
    def get_images(self):
        '''This method should return a list of URLs.'''
        return

    @abc.abstractmethod
    def get_source_info(self):
        '''This method should return a list containing a human friendly name at index 0, and a human readable url describing the source for this repository.
        For example, the EarthPorn subreddit returns a list ['EarthPorn Subreddit', 'http://reddit.com/r/EarthPorn'].
        This is used to populate the treeview object with your source information.'''
        return

    @abc.abstractmethod
    def get_pluginid(self):
        '''This method should return a string that represents this plugins ID.
        The pluginid is used to make calls to this plugin when necessary. It should be unique as ids are in a shared pool,
        so make sure the id is unique. The id should remain the same even when updated as some settings with the pluginid
        are persisted by the main application, and they will be lost if the id changes.
        '''
        return
This is the superclass of some Python plugins I wrote, which subclass it. They are dynamically loaded at runtime, and all of this works, except that even though I added a new abstract method to SourceBase, the plugins still load. They shouldn't, since none of them implement the new method (which I marked with @abc.abstractmethod).
My google-fu doesn't really show anything, so I'm not sure why I can still instantiate these plugins even though the superclass says they are abstract.
For example, in SourceBase, I added:

@abc.abstractmethod
def get_dependencies(self):
    print('ERROR: THIS PLUGIN SHOULD NOT HAVE LOADED.')
    '''This method should return a list of package names. The host program will check if these packages are available.
    If they are not available, the plugin will be disabled, but the user will be able to elect to install these packages.'''
    return
I did not define this method in my plugins, but I still get this output on the terminal:
....
Screen Height:1080
Screen Width:1920
files: ['bingIOTD.py', 'redditEP.py', 'redditLP.py', '__init__.py']
ERROR: THIS PLUGIN SHOULD NOT HAVE LOADED. <--abstract method
I'm not sure why it is being ignored; am I missing something? I've done this before with normal classes that aren't dynamically loaded. Any help is appreciated. I understand I could probably make a workaround (make a default return value and check for it), but that doesn't seem like the right way.
If you need more sourcecode my project is on SourceForge here.
In Python3 the metaclass is specified by
class SourceBase(metaclass=abc.ABCMeta):
not
class SourceBase(object):
    __metaclass__ = abc.ABCMeta
The code is ignoring the abstractmethod decorator because, as far as Python 3 is concerned, SourceBase is simply a standard class (an instance of type) with an attribute named __metaclass__, rather than an instance of abc.ABCMeta.
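With the correct Python 3 syntax, ABCMeta does its job: unimplemented abstract methods are tracked in __abstractmethods__ and instantiation raises TypeError. A minimal demonstration (BrokenPlugin is a made-up subclass):

```python
import abc

class SourceBase(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def get_images(self):
        '''Should return a list of URLs.'''

class BrokenPlugin(SourceBase):
    pass  # get_images is not implemented

print(sorted(BrokenPlugin.__abstractmethods__))  # → ['get_images']
try:
    BrokenPlugin()
except TypeError:
    print("instantiation refused")  # this branch runs
```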
I have a number of atomic classes (Components/Mixins, not really sure what to call them) in a library I'm developing, which are meant to be subclassed by applications. This atomicity was created so that applications can only use the features that they need, and combine the components through multiple inheritance.
However, sometimes this atomicity cannot be ensured, because one component may depend on another. For example, imagine I have a component that gives a graphical representation to an object, and another component which uses this graphical representation to perform collision checking. The first is purely atomic; the latter, however, requires that the current object has already subclassed the graphical representation component, so that its methods are available. This is a problem, because we somehow have to tell the users of the library that, in order to use a certain component, they also have to subclass this other one. We could make the collision component subclass the visual component, but if the user also subclasses the visual component it wouldn't work, because the class would not be on the same level (unlike a simple diamond relationship, which is desired), and they would get cryptic metaclass errors which are hard for the programmer to understand.
Therefore, I would like to know if there is any cool way, perhaps through metaclass redefinition or class decorators, to mark these unatomic components so that when they are subclassed, the additional dependency is injected into the current object if it's not yet available. Example:
class AtomicComponent(object):
    pass

@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent(object):
    pass

class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass

class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
Can someone give me a hint on how I can do this? Or whether it is even possible...
edit:
Since it is debatable whether the metaclass solution is the best one, I'll leave this unaccepted for 2 days.
Another solution might be to improve the error messages; for example, UserClass2 would give an error saying that UnAtomicComponent already extends this component. This, however, creates the problem that it is impossible to use two UnAtomicComponents, given that they would subclass object at different levels.
"Metaclasses"
This is what they are for! At time of class creation, the class parameters run through the
metaclass code, where you can check the bases and change then, for example.
This runs without error - though it does not preserve the order of needed classes
marked with the "depends" decorator:
class AutoSubclass(type):
    def __new__(metacls, name, bases, dct):
        new_bases = set()
        for base in bases:
            if hasattr(base, "_depends"):
                for dependence in base._depends:
                    if not dependence in bases:
                        new_bases.add(dependence)
        bases = bases + tuple(new_bases)
        return type.__new__(metacls, name, bases, dct)

__metaclass__ = AutoSubclass

def depends(*args):
    def decorator(cls):
        cls._depends = args
        return cls
    return decorator
class AtomicComponent:
    pass

@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent:
    pass

class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass

class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
(I removed inheritance from "object", as I declared a global __metaclass__ variable. All classes will still be new-style classes and have this metaclass. Inheriting from object or another class does override the global __metaclass__ variable, and a class-level __metaclass__ would have to be declared.)
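The global __metaclass__ mechanism is Python 2 only. A self-contained Python 3 rendering of the same idea would attach the metaclass to a common base with the metaclass= keyword; using a list instead of a set also preserves dependency order (Component here is an assumption, introduced just to carry the metaclass):

```python
class AutoSubclass(type):
    """Injects bases listed in a base class's _depends into new subclasses."""
    def __new__(metacls, name, bases, dct):
        new_bases = []
        for base in bases:
            for dependence in getattr(base, "_depends", ()):
                if dependence not in bases and dependence not in new_bases:
                    new_bases.append(dependence)  # a list keeps dependency order
        return super().__new__(metacls, name, bases + tuple(new_bases), dct)

def depends(*args):
    def decorator(cls):
        cls._depends = args
        return cls
    return decorator

class Component(metaclass=AutoSubclass):
    pass

class AtomicComponent(Component):
    pass

@depends(AtomicComponent)
class UnAtomicComponent(Component):
    pass

class UserClass(UnAtomicComponent):  # AtomicComponent is injected automatically
    pass

class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass

print(issubclass(UserClass, AtomicComponent))  # → True
```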
-- edit --
Without metaclasses, the way to go is to have your classes properly inherit from their dependencies. They will no longer be that "atomic", but since they could not work while being that atomic, it may not matter.
In the example below, classes C and D would be your user classes:
>>> class A(object): pass
...
>>> class B(A, object): pass
...
>>>
>>> class C(B): pass
...
>>> class D(B,A): pass
...
>>>