Using ABC, PolymorphicModel, django-models gives metaclass conflict - python

So far every other answer on SO says the exact same thing: construct your metaclasses and then inherit from the 'joined' version of those metaclasses, i.e.
class M_A(type): pass
class M_B(type): pass
class A(metaclass=M_A): pass
class B(metaclass=M_B): pass
class M_C(M_A, M_B): pass
class C(A, B, metaclass=M_C): pass
But I don't know what world these people are living in, where they're constructing their own metaclasses! Obviously, one would be using classes from other libraries, and unless you have a perfect handle on metaprogramming, how are you supposed to know whether you can just override a class's metaclass? (Clearly I do not have a handle on them yet.)
My problem is:
class InterfaceToTransactions(ABC):
    def account(self):
        return None
    ...

class Category(PolymorphicModel, InterfaceToTransactions):
    def account(self):
        return self.source_account
    ...

class Income(TimeStampedModel, InterfaceToTransactions):
    def account(self):
        return self.destination_account
    ...
Which of course gives me the error: "metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases"
I've tried many variations of the solution given above; the following does not work and gives the same error.
class InterfaceToTransactionsIntermediaryMeta(type(PolymorphicModel), type(InterfaceToTransactions)):
    pass

class Category(PolymorphicModel, InterfaceToTransactions):
    __metaclass__ = InterfaceToTransactionsIntermediaryMeta
    ...
Nor does putting anything inside the inner Meta class. I've read every single other SO question on this topic; please don't simply mark it as a duplicate.
-------------------Edited 1/8/18 after accepting the solution-------
Oddly enough, if I try to run makemigrations with this new configuration (the one I accepted), it starts giving the metaclass error again, but it still works at runtime. If I comment out the metaclass parts, makemigrations and migrate run successfully, but then I have to put the metaclass back in after migrating, every time.

If you are using Python 3, you are trying to use your derived metaclass incorrectly.
And since you get "the same error", and not some other possibly more subtle error, I'd say this is what is happening.
Try just changing to:
class IntermediaryMeta(type(InterfaceToTransactions), type(PolymorphicModel)):
    pass

class Category(PolymorphicModel, InterfaceToTransactions, metaclass=IntermediaryMeta):
    ...
(At least ABCMeta is guaranteed to work collaboratively using super(), which is motive enough to place it first in the bases tuple.)
If that yields new and improved errors, it means that one or both of those classes can't really collaborate properly, for one of several possible reasons. Then the way to go is to rework the part of your inheritance tree that depends on ABCMeta so it no longer does, since its role is almost aesthetic in a language where everything else is for "consenting adults", like Python.
Unfortunately, the way to do that is to use varying degrees of brute force, from the safe "rewrite everything" to monkey-patching ABCMeta and abstractmethod to simply do nothing in the module where InterfaceToTransactions is defined.
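For illustration, a minimal sketch of that monkey-patching idea (entirely hypothetical, and it must run before the module defining InterfaceToTransactions is imported):

# Brute-force sketch: make abc inert, so InterfaceToTransactions is created
# as a plain class whose metaclass is just `type`.
import abc

abc.abstractmethod = lambda func: func  # no longer marks anything as abstract
abc.ABC = object                        # inheriting from ABC becomes inheriting from object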
If you need to get there, and need some help, please post another question.
Sorry - this is actually one of the major drawbacks of using metaclasses.

Unless django-polymorphic decides to inherit from abc.ABC, this is going to be very difficult to achieve. A good solution would be to create your interface "manually". For instance:
class InterfaceToTransactions:
    def account(self):
        raise NotImplementedError("Account method must be implemented.")
    ...

class Category(PolymorphicModel, InterfaceToTransactions):
    def account(self):
        return self.source_account
    ...

class Income(TimeStampedModel, InterfaceToTransactions):
    def account(self):
        return self.destination_account
    ...
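A quick demonstration of what this buys you (BrokenCategory is a hypothetical class, not part of the code above): a subclass that forgets to override account() fails loudly the first time the method is called, rather than at class-creation time.

class BrokenCategory(InterfaceToTransactions):
    pass

BrokenCategory().account()  # raises NotImplementedError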

Related

Is it possible to set a Metaclass globally so it applies to all classes created by default?

I get that a metaclass can be substituted for type and define how a newly created class behaves.
ex:
class NoMixedCase(type):
    def __new__(cls, clsname, bases, clsdict):
        for name in clsdict:
            if name.lower() != name:
                raise TypeError("Bad name. Don't mix case!")
        return super().__new__(cls, clsname, bases, clsdict)

class Root(metaclass=NoMixedCase):
    pass

class B(Root):
    def Foo(self):  # TypeError
        pass
However, is there a way of setting NoMixedCase globally, so that any time a new class is created, its behavior is defined by NoMixedCase by default, without having to inherit from Root?
So if you did...
class B:
    def Foo(self):
        pass
...it would still check case on method names.
As for your question: no, it is not ordinarily possible - and possibly no extraordinary thing will work for this either - a lot of CPython's internals are tied to the type class and hardcoded to it.
What you could try, without crashing the interpreter right away, is to write a wrapper for type.__new__ and use ctypes to replace it directly in the type.__new__ slot (ordinary assignment won't do it). You'd probably still crash things.
So, in real life, if you decide not to go via a linter with a plug-in and commit hooks, as I suggested in the comment above, the way to go is to have a Base class that uses your metaclass, and get everyone in your project to inherit from that Base.
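For instance, sticking with the NoMixedCase example from the question (a sketch; Service and BadService are hypothetical names), the metaclass propagates to every descendant of Root, so the check follows the inheritance tree:

class Service(Root):           # fine: all method names are lowercase
    def handle(self):
        pass

try:
    class BadService(Service):
        def Handle(self):      # mixed case: rejected at class-creation time
            pass
except TypeError as e:
    print(e)                   # Bad name. Don't mix case!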

How to tell if a class is abstract in Python 3?

I wrote a metaclass that automatically registers its classes in a dict at runtime. In order for it to work properly, it must be able to ignore abstract classes.
The code works really well in Python 2, but I've run into a wall trying to make it compatible with Python 3.
Here's what the code looks like currently:
from abc import ABCMeta

def AutoRegister(registry, base_type=ABCMeta):
    class _metaclass(base_type):
        def __init__(self, what, bases=None, attrs=None):
            super(_metaclass, self).__init__(what, bases, attrs)

            # Do not register abstract classes.
            # Note that we do not use `inspect.isabstract` here, as
            # that only detects classes with unimplemented abstract
            # methods - which is a valid approach, but not what we
            # want here.
            # :see: http://stackoverflow.com/a/14410942/
            metaclass = attrs.get('__metaclass__')
            if not (metaclass and issubclass(metaclass, ABCMeta)):
                registry.register(self)
    return _metaclass
Usage in Python 2 looks like this:
# Abstract classes; these are not registered.
class BaseWidget(object): __metaclass__ = AutoRegister(widget_registry)
class BaseGizmo(BaseWidget): __metaclass__ = ABCMeta
# Concrete classes; these get registered.
class AlphaWidget(BaseWidget): pass
class BravoGizmo(BaseGizmo): pass
What I can't figure out, though, is how to make this work in Python 3.
How can a metaclass determine if it is initializing an abstract class in Python 3?
PEP 3119 describes how the ABCMeta metaclass "marks" abstract methods and creates an __abstractmethods__ frozenset containing all the methods of a class that are still abstract. So, to check whether a class cls is abstract, check whether cls.__abstractmethods__ is empty or not.
I also found this relevant post on abstract classes useful.
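A short, self-contained sketch of that check (Base and Impl are hypothetical names):

import inspect
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def run(self): ...

class Impl(Base):
    def run(self):
        return 42

print(Base.__abstractmethods__)   # frozenset({'run'})
print(Impl.__abstractmethods__)   # frozenset()
print(inspect.isabstract(Base))   # True
print(inspect.isabstract(Impl))   # False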
I couldn't shake the feeling as I was posting this question that I was dealing with an XY Problem. As it turns out, that's exactly what was going on.
The real issue here is that the AutoRegister metaclass, as implemented, relies on a flawed understanding of what an abstract class is. Python or not, one of the most important criteria of an abstract class is that it is not instantiable.
In the example posted in the question, BaseWidget and BaseGizmo are instantiable, so they are not abstract.
Aren't we just bifurcating rabbits here?
Well, why was I having so much trouble getting AutoRegister to work in Python 3? Because I was trying to build something whose behavior contradicts the way classes work in Python.
The fact that inspect.isabstract wasn't returning the result I wanted should have been a major red flag: AutoRegister is a warranty-voider.
So what's the real solution then?
First, we have to recognize that BaseWidget and BaseGizmo have no reason to exist. They do not provide enough functionality to be instantiable, nor do they declare abstract methods that describe the functionality that they are missing.
One could argue that they could be used to "categorize" their sub-classes, but a) that's clearly not what's going on in this case, and b) quack.
Instead, we could embrace Python's definition of "abstract":
Modify BaseWidget and BaseGizmo so that they define one or more abstract methods.
If we can't come up with any abstract methods, then can we remove them entirely?
If we can't remove them but also can't make them properly abstract, it might be worthwhile to take a step back and see if there are other ways we might solve this problem.
Modify the definition of AutoRegister so that it uses inspect.isabstract to decide if a class is abstract (a sketch follows below).
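A sketch of what that might look like (hedged; the final implementation the original answer linked to is not reproduced here):

import inspect
from abc import ABCMeta

def AutoRegister(registry, base_type=ABCMeta):
    class _metaclass(base_type):
        def __init__(cls, what, bases=None, attrs=None):
            super().__init__(what, bases, attrs)
            # Register only classes with no unimplemented abstract methods.
            if not inspect.isabstract(cls):
                registry.register(cls)
    return _metaclass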
That's cool and all, but what if I can't change the base classes?
Or, if you have to maintain backwards compatibility with existing code (as was the case for me), a decorator is probably easier:
@widget_registry.register
class AlphaWidget(object):
    pass

@widget_registry.register
class BravoGizmo(object):
    pass
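For completeness, a minimal registry this usage assumes (hypothetical; the question never shows widget_registry's implementation). The one subtlety is that register() must return the class for the decorator form to work:

class Registry(object):
    def __init__(self):
        self._classes = {}

    def register(self, cls):
        self._classes[cls.__name__] = cls
        return cls  # returning cls is what makes @widget_registry.register usable

widget_registry = Registry()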

Are Mixin classes abstract base classes

Are mixin classes abstract base classes? In the example below, the calls to test_base would fail because Python wouldn't be able to resolve, for example, self.assertEqual.
Also, is PyCharm incorrect in flagging mixin classes like the one below as having unresolved attribute references?
class TestConverterMixin(object):
    def setUp(self):
        self.alt_hasher = getattr(hash, self.converter.__class__.__name__)

    def test_base(self):
        with self.settings(PASSWORD_HASHERS=[self.hasher, ]):
            load_hashers(settings.PASSWORD_HASHERS)
            for password in PASSWORDS:
                orig = self.alt_hasher.encrypt(password)
                conv = self.converter.from_orig(orig)
                # see if we get a working hash:
                self.assertTrue(check_password(password, conv))
                # convert back and test with passlib:
                back = self.converter.to_orig(conv)
                self.assertEqual(orig, back)
Are mixin classes abstract base classes? The most accurate answer for your case is no, but it probably should be.
Your class as a stand-alone cannot survive, for the reasons you pointed out. By making it an ABC you explicitly tell anyone looking at your class (like PyCharm) that:
from abc import ABCMeta, abstractmethod

class TestConverterMixin(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def assertEqual(self, other):
        "Need concrete implementation somewhere"

    # ... the rest of your code
The problem is that you would need this for all of the other methods (self.assertTrue, self.converter, etc.). You could have something else in mind, but this seriously looks like just a subclass of unittest.TestCase to me.
Oh, and was PyCharm wrong? No, it got it right. If you made this an ABC or a subclass of TestCase, it would not have complained. If you used interfaces, like zope.Interface, PyCharm and the like usually get that wrong, since they don't understand the registration and lookup process (it is outside the Python core).
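A minimal sketch of that pattern (UpperMixin and UpperTest are hypothetical names): the mixin never runs on its own; a concrete class mixes it into unittest.TestCase, which is where assertEqual and friends actually live.

import unittest

class UpperMixin(object):
    def test_upper(self):
        self.assertEqual("abc".upper(), "ABC")

class UpperTest(UpperMixin, unittest.TestCase):
    pass

if __name__ == "__main__":
    unittest.main()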
I kept having trouble getting PyCharm to stop complaining about unresolved attribute references on mixin classes. In particular, I also had mixin classes depending on other mixin classes, for which I couldn't make one inherit from the other. But then I found this almost perfect way to make PyCharm 2017.1 happy:
class Human:
    def is_male(self):
        return True

class BeardMixin:
    _facial_hair = {'length': 7, 'color': 'brown'}

    def has_beard(self):
        return True

class BeardLengthMixin:
    """Mixin for class Human with BeardMixin to provide get_beard_length()"""

    def get_beard_length(self):
        assert isinstance(self, (Human, BeardMixin))
        # PyCharm will now not complain about any of these 3 attributes
        if self.is_male() and self.has_beard():
            return self._facial_hair['length']
The assert statement gives PyCharm the necessary information about which types self could be. There's a drawback, though: the assert statement itself does not do what you might think. It only checks that self is an instance of either type, not of both. Unfortunately, using two assert statements doesn't work, because the second overrides the first as far as PyCharm's type deduction is concerned.

Python2.7: infinite loop when super __init__ creates an instance of it's own subclass

I have the sense that this must be kind of a dumb question - noob here. So I'm open to an answer of the sort "This is ass-backwards, don't do it, please try this: [proper way]".
I'm using Python 2.7.5.
General Form of the Problem
This causes an infinite loop unless Thesaurus (an app-wide singleton) avoids calling Baseclass.__init__():
class Baseclass():
    def __init__(self):
        thes = Thesaurus()
        # do stuff

class Thesaurus(Baseclass):
    def __init__(self):
        Baseclass.__init__(self)
        # do stuff
My Specific Case
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps it should just be an interface). This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (i.e. {'yes': 'yep', 'ok'}).
But since the subclass calls the superclass's __init__(), which in turn creates another instance of the subclass, loops ensue. Not calling the superclass's __init__() works just fine, but I'm concerned that's merely a lucky coincidence, and that my Thesaurus class may eventually be modified to require its parent's __init__().
Advice?
Well, I'll stop looking at your code and just base my answer on what you say:
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps should just be an interface).
this would be ThesaurusBase in the code below
This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (ie. {'yes':'yep', 'ok'}).
That would be ThesaurusSingleton, which you can give a better name and make actually useful.
class ThesaurusBase():
    def __init__(self, singleton=None):
        self.singleton = singleton

    def mymethod1(self):
        raise NotImplementedError

    def mymethod2(self):
        raise NotImplementedError

class ThesaurusSingleton(ThesaurusBase):
    def mymethod1(self):
        return "meaw!"

class Thesaurus(ThesaurusBase):
    def __init__(self, singleton=None):
        ThesaurusBase.__init__(self, singleton)

    def mymethod1(self):
        return "quack!"

    def mymethod2(self):
        return "\\_o<"
now you can create your objects as follows:
singleton = ThesaurusSingleton()
thesaurus = Thesaurus(singleton)
edit:
Basically, what I've done here is build a "Base" class that is just an interface defining the expected behavior for all its children classes. The class ThesaurusSingleton (I know, it's a terrible name) also implements that interface, because you said it had to, and I did not want to discuss your design; you may always have good reasons for weird constraints.
And finally, do you really need to instantiate your singleton inside the class that is defining the singleton object? Though there may be some hackish way to do so, there's often a better design that avoids the "hackish" part.
What I think is that however you create your singleton, you'd better do it explicitly. That's in the "Zen of Python": explicit is better than implicit. Why? Because then people reading your code (and that might be you in six months) will be able to understand what's happening and what you were thinking when you wrote it. If you try to make things more implicit (like using sophisticated metaclasses and weird self-inheritance), you may be wondering what this code does in less than three weeks!
I'm not telling you to avoid those kinds of options, but to use the sophisticated stuff only when you've run out of simple ones!
Based on what you said, I think the solution I gave can be a starting point. But since you focus on some obscure, and not very useful, hackish stuff instead of talking about your design, I can't be sure whether my example is appropriate, or hint at a better design.
edit2:
There's another way to achieve what you say you want (but be sure that's really the design you want). You may want to use a class method that acts on the class itself (instead of on instances) and thus enables you to store a class-wide instance of itself:
>>> class ThesaurusBase:
...     @classmethod
...     def initClassWide(cls):
...         cls._shared = cls()
...
>>> class T(ThesaurusBase):
...     def foo(self):
...         print self._shared
...
>>> ThesaurusBase.initClassWide()
>>> t = T()
>>> t.foo()
<__main__.ThesaurusBase instance at 0x7ff299a7def0>
and you can call the initClassWide method at module level in the module where you declare ThesaurusBase, so whenever you import that module, it will have the singleton loaded (the import mechanism ensures that Python modules are run only once).
The short answer is:
do not instantiate an instance of a subclass from the superclass constructor
Longer answer:
If the motive for trying to do this is that Thesaurus is a singleton, then you'll be better off exposing the singleton through a static method on Thesaurus and calling that method whenever you need the singleton.
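A sketch of that suggestion (hypothetical names; note that Thesaurus no longer inherits from Baseclass, so the superclass constructor never constructs a subclass):

class Thesaurus(object):
    _instance = None

    @staticmethod
    def instance():
        # Lazily create and cache the shared instance.
        if Thesaurus._instance is None:
            Thesaurus._instance = Thesaurus()
        return Thesaurus._instance

class Baseclass(object):
    def __init__(self):
        self.thes = Thesaurus.instance()  # look it up, don't construct it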

Dynamic sub-classing in Python

I have a number of atomic classes (Components/Mixins, not really sure what to call them) in a library I'm developing, which are meant to be subclassed by applications. This atomicity was created so that applications can only use the features that they need, and combine the components through multiple inheritance.
However, sometimes this atomicity cannot be ensured, because one component may depend on another. For example, imagine I have a component that gives a graphical representation to an object, and another component which uses this graphical representation to perform some collision checking. The first is purely atomic; however, the latter requires that the current object already subclasses this graphical-representation component, so that its methods are available to it. This is a problem, because we have to somehow tell the users of this library that, in order to use a certain component, they also have to subclass this other one. We could make the collision component subclass the visual component, but if the user also subclasses the visual component, it wouldn't work, because the class is not on the same level (unlike a simple diamond relationship, which is desired), and it would give the cryptic metaclass errors which are hard for the programmer to understand.
Therefore, I would like to know if there is any cool way, maybe through metaclass redefinition or class decorators, to mark these unatomic components so that, when they are subclassed, the additional dependency is injected into the current object if it's not yet available. Example:
class AtomicComponent(object):
    pass

@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent(object):
    pass

class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass

class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
Can someone give me a hint on how I can do this, or whether it is even possible...
edit:
Since it is debatable that the meta class solution is the best one, I'll leave this unaccepted for 2 days.
Other solutions might be to improve the error messages; for example, doing something like UserClass2 would give an error saying that UnAtomicComponent already extends this component. This, however, creates the problem that it is impossible to use two UnAtomicComponents, given that they would subclass object on different levels.
"Metaclasses"
This is what they are for! At class-creation time, the class parameters run through the metaclass code, where you can check the bases and change them, for example.
This runs without error - though it does not preserve the order of the needed classes marked with the "depends" decorator:
class AutoSubclass(type):
    def __new__(metacls, name, bases, dct):
        new_bases = set()
        for base in bases:
            if hasattr(base, "_depends"):
                for dependence in base._depends:
                    if dependence not in bases:
                        new_bases.add(dependence)
        bases = bases + tuple(new_bases)
        return type.__new__(metacls, name, bases, dct)

__metaclass__ = AutoSubclass

def depends(*args):
    def decorator(cls):
        cls._depends = args
        return cls
    return decorator

class AtomicComponent:
    pass

@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent:
    pass

class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass

class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
(I removed the inheritance from "object", as I declared a global __metaclass__ variable. All classes will still be new-style classes and will have this metaclass. Inheriting from object or another class overrides the global __metaclass__ variable, and a class-level __metaclass__ would have to be declared instead.)
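For reference, a hypothetical Python 3 translation of the same idea - the module-level __metaclass__ hook is gone in Python 3, so each component has to name the metaclass explicitly (or inherit from a base that does); using a list instead of a set also preserves dependency order:

class AutoSubclass(type):
    def __new__(metacls, name, bases, dct):
        new_bases = []
        for base in bases:
            for dependence in getattr(base, "_depends", ()):
                if dependence not in bases and dependence not in new_bases:
                    new_bases.append(dependence)  # inject the missing dependency
        return super().__new__(metacls, name, bases + tuple(new_bases), dct)

def depends(*args):
    def decorator(cls):
        cls._depends = args
        return cls
    return decorator

class AtomicComponent(metaclass=AutoSubclass):
    pass

@depends(AtomicComponent)
class UnAtomicComponent(metaclass=AutoSubclass):
    pass

class UserClass(UnAtomicComponent):  # AtomicComponent is injected automatically
    pass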
-- edit --
Without metaclasses, the way to go is to have your classes properly inherit from their dependencies. They will no longer be that "atomic", but since they could not work while being that atomic, it may not matter.
In the example below, classes C and D would be your user classes:
>>> class A(object): pass
...
>>> class B(A, object): pass
...
>>>
>>> class C(B): pass
...
>>> class D(B,A): pass
...
>>>
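Checking the MRO shows why both orderings are legal:

>>> C.__mro__
(<class '__main__.C'>, <class '__main__.B'>, <class '__main__.A'>, <type 'object'>)
>>> D.__mro__
(<class '__main__.D'>, <class '__main__.B'>, <class '__main__.A'>, <type 'object'>)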
