Conditional class inheritance definition in Python

I have a Linux-based Python application that makes use of PyGTK and GTK.
It has both a UI mode and a command-line mode of execution.
In UI mode, the main application window class is defined as:
class ToolWindow(common.Singleton, gtk.Window):
    def __init__(self):
        gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)
What I want is this: if the application is able to import gtk and pygtk, then
ToolWindow should inherit from both common.Singleton and gtk.Window; otherwise it should inherit only from common.Singleton.
What is the best way to do this?

You can specify a metaclass where you can test what modules are importable:
class Meta(type):
    def __new__(cls, name, bases, attrs):
        try:
            import gtk
            # note the trailing comma: without it this is not a tuple
            bases += (gtk.Window,)
        except ImportError:
            # gtk module not available
            pass
        # Create the class with the new bases tuple
        return super(Meta, cls).__new__(cls, name, bases, attrs)
class ToolWindow(common.Singleton):
    __metaclass__ = Meta
    ...
This is just a rough sketch - many improvements are possible - but it should help you get started.
Be aware that you will also need to change ToolWindow's __init__() method, since the gtk module may not be available. You could, for example, set a flag in the metaclass and check it later, or even redefine __init__() from within the metaclass depending on whether the module is available - there are several ways of tackling this.
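As a minimal sketch of the flag idea (the has_gtk attribute is an illustration, not part of the original code):
class Meta(type):
    def __new__(cls, name, bases, attrs):
        try:
            import gtk
            bases += (gtk.Window,)
            attrs['has_gtk'] = True   # flag checked later in __init__
        except ImportError:
            attrs['has_gtk'] = False  # run in command-line mode
        return super(Meta, cls).__new__(cls, name, bases, attrs)

class ToolWindow(common.Singleton):
    __metaclass__ = Meta

    def __init__(self):
        if self.has_gtk:
            import gtk
            gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)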

Related

Django: Custom Metaclass Inheriting From And Extending `ModelBase`

I am trying to do some metaclass hocus-pocus. I want my own Metaclass
to inherit from ModelBase and then I want to add additional logic by
extending its __new__ method. However I think there is something
strange happening with the MRO/inheritance order in the way I'm using it.
Here is the basic situation:
from django.db.models import Model, ModelBase

class CustomMetaclass(ModelBase):
    def __new__(cls, name, bases, attrs):
        # As I am trying to extend `ModelBase`, I was expecting this
        # call to `super` to give me the return value from here:
        # https://github.com/django/django/blob/master/django/db/models/base.py#L300
        # And that I would be able to access everything in `_meta` with
        # `clsobj._meta`. But actually this object is
        # `MyAbstractModel` and has no `_meta` property so I'm pretty
        # sure `__new__` isn't being called on `ModelBase` at all at
        # this point.
        clsobj = super().__new__(cls, name, bases, attrs)
        # Now, I want to have access to the `_meta` property set up by
        # `ModelBase` so I can dispatch on the data in there. For
        # example, let's do something with the field definitions.
        for field in clsobj._meta.get_fields():
            do_stuff_with_fields()
        return clsobj
class MyAbstractModel(metaclass=CustomMetaclass):
    """This model is abstract because I only want the custom metaclass
    logic to apply to those models of my choosing and I don't want to
    be able to instantiate it directly. See the class definitions below.
    """
    class Meta:
        abstract = True
class MyModel(Model):
    """Regular model, will be derived from metaclass `ModelBase` as usual."""
    pass

class MyCustomisedModel(MyAbstractModel):
    """This model should enjoy the logic defined by our extended `__new__` method."""
    pass
Any ideas why __new__ on ModelBase isn't being called by
CustomMetaclass? How can I correctly extend ModelBase in this way? I'm pretty sure metaclass inheritance is possible,
but it seems like I'm missing something...
The way to get a clsobj with the _meta attribute is as simple as:
class CustomMetaclass(ModelBase):
    def __new__(cls, name, bases, attrs):
        bases = (Model,)
        clsobj = super().__new__(cls, name, bases, attrs)
        for field in clsobj._meta.get_fields():
            do_stuff_with_fields()
        return clsobj
And we can do the same thing with MyAbstractModel(Model, metaclass=CustomMetaclass).
But ultimate success here still depends on the kind of work we intend to do in the __new__ method. If we want to introspect and work with the class's fields using metaprogramming, we need to be aware that we are rewriting the class in __new__ at import time. Because this is Django, the app registry is not yet ready at that point, which can cause exceptions if certain conditions arise (e.g. we are forbidden to access or work with reverse relations). This happens even when Model is passed into __new__ as a base.
We can half-circumvent some of those problems by using the following non-public call to _get_fields (which Django does itself in certain places):
class CustomMetaclass(ModelBase):
    def __new__(cls, name, bases, attrs):
        bases = (Model,)
        clsobj = super().__new__(cls, name, bases, attrs)
        for field in clsobj._meta._get_fields(reverse=False):
            do_stuff_with_fields()
        return clsobj
But depending on the scenario and what we are trying to achieve we might still hit problems; for example, we won't be able to access any reverse relations using our metaclass. So still no good.
To overcome this restriction we have to leverage signals in the app registry to make our classes as dynamic as we want them to be with full access to _meta.get_fields.
See this ticket: https://code.djangoproject.com/ticket/24231
The main takeaway being: "a Django model class is not something you are permitted to work with outside the context of a prepared app registry."
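A rough sketch of that registry-based approach, assuming an app label 'myapp' and reusing the hypothetical do_stuff_with_fields helper from above: defer the introspection to AppConfig.ready(), which runs once the registry is prepared, so get_fields() (including reverse relations) is safe to call.
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # The app registry is fully populated here, so reverse
        # relations can be traversed safely.
        for model in self.get_models():
            for field in model._meta.get_fields():
                do_stuff_with_fields()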

Dynamically add class variables to classes inheriting mixin class

I've got a mixin class that adds some functionality to inheriting classes, but the mixin requires some class attributes to be present; for simplicity, let's say only one attribute, handlers. So this would be the usage of the mixin:
class Mixin:
    pass

class Something(Mixin):
    handlers = {}
The mixin can't function without this being defined, but I really don't want to specify the handlers in every class that I want to use the mixin with. So I solved this by writing a metaclass:
class MixinMeta(type):
    def __new__(mcs, *args, **kwargs):
        cls = super().__new__(mcs, *args, **kwargs)
        cls.handlers = {}
        return cls

class Mixin(metaclass=MixinMeta):
    pass
And this works exactly how I want it to. But I'm worried it could become a huge problem, since metaclasses don't mix well (I've read that metaclass conflicts can only be resolved by creating a new metaclass that reconciles them).
Also, I don't want to make handlers an attribute of the Mixin class itself, since that would mean storing handlers keyed by class name inside the Mixin class, complicating the code a bit. I like each class having its handlers on its own class - it makes working with them simpler, but clearly this has drawbacks.
My question is: what would be a better way to implement this? I'm fairly new to metaclasses, but they seem to solve this problem well. Still, metaclass conflicts are clearly a big issue in complex hierarchies, and I'd rather not define extra metaclasses just to resolve those conflicts.
Your problem is very real, and the Python folks have thought of this for Python 3.6 (still unreleased at the time of writing) onwards. For now (up to Python 3.5), if your attributes can wait to exist until your classes are first instantiated, you can put code that creates the (class) attribute in the __new__ method of your mixin class itself - thus avoiding the (extra) metaclass:
class Mixin:
    def __new__(cls):
        if not hasattr(cls, 'handlers'):
            cls.handlers = {}
        return super().__new__(cls)
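A quick behavioral sketch of this pre-3.6 variant (class names are illustrative): the dict only appears on a subclass once it has been instantiated.
class Something(Mixin):
    pass

assert not hasattr(Something, 'handlers')  # not created at class definition
Something()                                # first instantiation creates it
assert Something.handlers == {}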
For Python 3.6 on, PEP 487 defines a __init_subclass__ special method to go on the mixin class body. This special method is not called for the mixin class itself, but will be called at the end of type.__new__ method (the "root" metaclass) for each class that inherits from your mixin.
class Mixin:
    def __init_subclass__(cls, **kwargs):
        cls.handlers = {}
        return super().__init_subclass__(**kwargs)
As per the PEP's background text, the main motivation for this is exactly what led you to ask your question: avoiding the need for metaclasses when simple customization of class creation is needed, in order to reduce the chances of needing different metaclasses in a project and thus triggering a metaclass conflict.
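For contrast, a sketch of the 3.6 behavior (again with illustrative class names): every subclass gets its own dict the moment the class statement runs, and the dicts are independent.
class Something(Mixin):
    pass

class Other(Mixin):
    pass

assert Something.handlers == {} and Other.handlers == {}
assert Something.handlers is not Other.handlers  # one dict per subclass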

Plugin architecture - Plugin Manager vs inspecting from plugins import *

I'm currently writing an application which allows the user to extend it via a 'plugin' type architecture. They can write additional Python classes based on a BaseClass object I provide, and these are loaded against various application signals. The exact number and names of the classes loaded as plugins are unknown before the application is started, but they are only loaded once, at startup.
During my research into the best way to tackle this I've come up with two common solutions.
Option 1 - Roll your own using imp, pkgutil, etc.
See, for instance, this answer or this one.
Option 2 - Use a plugin manager library
Randomly picking a couple:
straight.plugin
yapsy
this approach
My question is - on the proviso that the application must be restarted in order to load new plugins - is there any benefit of the above methods over something inspired by this SO answer and this one, such as:
import inspect
import sys
import my_plugins

def predicate(c):
    # filter to classes
    return inspect.isclass(c)

def load_plugins():
    for name, obj in inspect.getmembers(sys.modules['my_plugins'], predicate):
        obj.register_signals()
Are there any disadvantages to this approach compared to the ones above (other than all the plugins having to be in the same file)? Thanks!
EDIT
The comments requested further information... the only additional thing I can think to add is that the plugins use the blinker library to provide signals that they subscribe to. Each plugin may subscribe to different signals of different types and hence must have its own specific "register" method.
Since Python 3.6, a new class method, __init_subclass__, has been added; it is called on a base class whenever a new subclass is created.
This method can further simplify the solution offered by will-hart above, by removing the metaclass.
The __init_subclass__ method was introduced with PEP 487: Simpler customization of class creation. The PEP comes with a minimal example for a plugin architecture:
It is now possible to customize subclass creation without using a
metaclass. The new __init_subclass__ classmethod will be called on
the base class whenever a new subclass is created:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)

class Plugin1(PluginBase):
    pass

class Plugin2(PluginBase):
    pass
The PEP example above stores references to the classes in the PluginBase.subclasses field.
If you want to store instances of the plugin classes, you can use a structure like this:
class Plugin:
    """Base class for all plugins. Singleton instances of subclasses are
    created automatically and stored in the Plugin.plugins class field."""
    plugins = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.plugins.append(cls())

class MyPlugin1(Plugin):
    def __init__(self):
        print("MyPlugin1 instance created")

    def do_work(self):
        print("Do something")

class MyPlugin2(Plugin):
    def __init__(self):
        print("MyPlugin2 instance created")

    def do_work(self):
        print("Do something else")

for plugin in Plugin.plugins:
    plugin.do_work()
which outputs:
MyPlugin1 instance created
MyPlugin2 instance created
Do something
Do something else
The metaclass approach is useful for this issue in Python < 3.6 (see quasoft's answer for Python 3.6+). It is very simple and acts automatically on any imported module. In addition, complex logic can be applied to plugin registration with very little effort. The metaclass approach works as follows:
1) A custom PluginMount metaclass is defined which maintains a list of all plugins
2) A Plugin class is defined which sets PluginMount as its metaclass
3) When a class deriving from Plugin - for instance MyPlugin - is imported, it triggers the __init__ method on the metaclass. This registers the plugin and performs any application-specific logic and event subscription.
Alternatively, if you put the PluginMount.__init__ logic in PluginMount.__new__, it is called whenever a new Plugin-derived class is created.
class PluginMount(type):
    """
    A plugin mount point derived from:
    http://martyalchin.com/2008/jan/10/simple-plugin-framework/
    Acts as a metaclass which creates anything inheriting from Plugin
    """
    def __init__(cls, name, bases, attrs):
        """Called when a Plugin derived class is imported"""
        if not hasattr(cls, 'plugins'):
            # Called when the metaclass is first instantiated
            cls.plugins = []
        else:
            # Called when a plugin class is imported
            cls.register_plugin(cls)

    def register_plugin(cls, plugin):
        """Add the plugin to the plugin list and perform any registration logic"""
        # create a plugin instance and store it
        # optionally you could just store the plugin class and lazily instantiate
        instance = plugin()
        # save the plugin reference
        cls.plugins.append(instance)
        # apply plugin logic - in this case connect the plugin to blinker signals
        # this must be defined in the derived class
        instance.register_signals()
Then the base plugin class looks like:
class Plugin(object):
    """A plugin which must provide a register_signals() method"""
    __metaclass__ = PluginMount
Finally, an actual plugin class would look like the following:
class MyPlugin(Plugin):
    def register_signals(self):
        print "Class created and registering signals"

    def other_plugin_stuff(self):
        print "I can do other plugin stuff"
Plugins can be accessed from any Python module that has imported Plugin:
for plugin in Plugin.plugins:
    plugin.other_plugin_stuff()
See the full working example
The approach from will-hart was the most useful one for me!
Since I needed more control, I wrapped the plugin base class in a function like:
def get_plugin_base(name='Plugin',
                    cls=object,
                    metaclass=PluginMount):

    def iter_func(self):
        for mod in self._models:
            yield mod

    bases = cls if isinstance(cls, tuple) else (cls,)
    class_dict = dict(
        _models=None,
        session=None
    )
    class_dict['__iter__'] = iter_func
    return metaclass(name, bases, class_dict)
and then:
from plugin import get_plugin_base
Plugin = get_plugin_base()
This allows adding additional base classes or switching to another metaclass.
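For example, a hypothetical call that mixes in an extra base class (SessionAware is an illustrative name, not from the original answer):
class SessionAware(object):
    session = None

# Plugin subclasses now inherit SessionAware behavior as well
Plugin = get_plugin_base(cls=SessionAware)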

Scanning for thread violations with Tkinter

We are just about to finish a very large update to our application, which is built with Python 2.5 and Tkinter, and sadly the following error has crept in:
alloc: invalid block: 06807CE7: 1 0 0
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
We've seen this before; it is usually a Tcl interpreter error caused when a non-GUI thread tries to access Tk via Tkinter in any way (Tk not being thread safe). The error pops up on application close, after the Python interpreter is finished with our code. This error is very hard to reproduce, and I'm thinking I will have to scan all threads in the system to see if they access Tk when they shouldn't.
I'm looking for a magic Python trick to help with this. All Tkinter widgets we use are first subclassed and inherit from our own Widget base class.
With this in mind, I'm looking for a way to add the following check to the beginning of every method in the widget subclasses:
import thread

if thread.get_ident() != TKINTER_GUI_THREAD_ID:
    assert 0, "Invalid thread accessing Tkinter!"
Decorators come to mind as a partial solution. I do not want to add decorators manually to each method, however. Is there a way I can add the decorator to all methods of a class that inherits from our Widget base class? Or is there a better way to do all this? Or does anyone have more info about this error?
I don't know if your approach is good, as I don't know Tkinter.
But here's a sample of how to decorate all class methods using a metaclass.
import functools

# This is the decorator
def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print 'calling', func.__name__, 'from decorator'
        return func(*args, **kwargs)
    return wrapper

# This is the metaclass
class DecorateMeta(type):
    def __new__(cls, name, bases, attrs):
        for key in attrs:
            # Skip special methods, e.g. __init__
            if not key.startswith('__') and callable(attrs[key]):
                attrs[key] = my_decorator(attrs[key])
        return super(DecorateMeta, cls).__new__(cls, name, bases, attrs)

# This is a sample class that uses the metaclass
class MyClass(object):
    __metaclass__ = DecorateMeta

    def __init__(self):
        print 'in __init__()'

    def test(self):
        print 'in test()'

obj = MyClass()
obj.test()
The metaclass overrides the class creation. It loops through all the attributes of the class being created and decorates all callable attributes that have a "regular" name with my_decorator.
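To tie this back to the question, a sketch of what my_decorator could look like with the thread check wired in (assuming TKINTER_GUI_THREAD_ID was recorded on the GUI thread at startup; this wiring is not part of the answer above):
import functools
import thread

def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Fail loudly if a non-GUI thread calls into a widget method
        assert thread.get_ident() == TKINTER_GUI_THREAD_ID, \
            "Invalid thread accessing Tkinter!"
        return func(*args, **kwargs)
    return wrapper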
I went with a slightly easier method: I overrode __getattribute__. The code is as follows:
def __getattribute__(self, name):
    # Defined on our Widget base class, so every attribute access on
    # any widget subclass goes through this check.
    import ApplicationInfo
    import thread, traceback

    if ApplicationInfo.main_loop_thread_id != thread.get_ident():
        print "Thread GUI violation"
        traceback.print_stack()

    return object.__getattribute__(self, name)
And sure enough, we found one obscure place where we were accessing state from within Tk while not being in the main GUI thread.
Although I must admit I need to review my Python; I feel noobish looking at your example.

Python: Metaclasses all the way down

I have an esoteric question involving Python metaclasses. I am creating a Python package for web-server-side code that will make it easy to access arbitrary Python classes via client-side proxies. My proxy-generating code needs a catalog of all of the Python classes that I want to include in my API. To create this catalog, I am using the __metaclass__ special attribute to put a hook into the class-creation process. Specifically, all of the classes in the "published" API will subclass a particular base class, PythonDirectPublic, which itself has a __metaclass__ that has been set up to record information about the class creation.
So far so good. Where it gets complicated is that I want my PythonDirectPublic itself to inherit from a third-party class (enthought.traits.api.HasTraits). This third-party class also uses a __metaclass__.
So what's the right way of managing two metaclasses? Should my metaclass be a subclass of Enthought's metaclass? Or should I just invoke Enthought's metaclass inside my metaclass's __new__ method to get the type object that I will return? Or is there some other mystical incantation to use in this particular circumstance?
Should my metaclass be a subclass of Enthought's metaclass?
I believe that is in fact your only choice. If the metaclass of a derived class is not a subclass of the metaclasses of all of its bases, then Python will throw a TypeError when you try to create the derived class. So your metaclass for PythonDirectPublic should look something like
class DerivedMetaClass(BaseMetaClass):
    def __new__(cls, name, bases, dct):
        # Do your custom memory allocation here, if any
        # Now let base metaclass do its memory allocation stuff
        return BaseMetaClass.__new__(cls, name, bases, dct)

    def __init__(cls, name, bases, dct):
        # Do your custom initialization here, if any
        # This, I assume, is where your catalog creation stuff takes place
        # Now let base metaclass do its initialization stuff
        super(DerivedMetaClass, cls).__init__(name, bases, dct)
If you don't have access to the definition of the metaclass for your third-party base class, you can replace BaseMetaClass with enthought.traits.api.HasTraits.__metaclass__. It's wordy, but it will work.
Specifically, all of the classes in the "published" API will subclass a particular base class, PythonDirectPublic
Rather than adding another metaclass, you could recursively use the result of PythonDirectPublic.subclasses().
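As a sketch of that recursive walk, using the built-in type.__subclasses__() hook (whether HasTraits offers a nicer subclasses() wrapper is an assumption I won't rely on):
def all_subclasses(cls):
    # Depth-first walk over the class tree rooted at cls
    for sub in cls.__subclasses__():
        yield sub
        for nested in all_subclasses(sub):
            yield nested

catalog = list(all_subclasses(PythonDirectPublic))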
