I can't understand why the following code behaves a particular way, which is described below:
from abc import ABCMeta

class PackageClass(object):
    __metaclass__ = ABCMeta

class MyClass1(PackageClass):
    pass

MyClass2 = type('MyClass2', (PackageClass, ), {})

print MyClass1
print MyClass2
<class '__main__.MyClass1'>
<class 'abc.MyClass2'>
Why does repr(MyClass2) say abc.MyClass2 (which is, by the way, not true)?
Thank you!
The problem stems from the fact that ABCMeta overrides __new__ and calls its superclass constructor (type()) there. type() derives the __module__ for the new class from its calling context [1]; in this case, the type call appears to come from the abc module. Hence, the new class has __module__ set to abc (since type() has no way of knowing that the actual class construction took place in __main__).
The easy way around is to just set __module__ yourself after creating the type:
MyClass2 = type('MyClass2', (PackageClass, ), {})
MyClass2.__module__ = __name__
I would also recommend filing a bug report.
Related: Base metaclass overriding __new__ generates classes with a wrong __module__, Weird inheritance with metaclasses
[1]: type is a type object defined in C. Its __new__ method uses the current global __name__ as the __module__, unless the namespace dict it receives already contains a __module__ entry.
Related
Class objects have a __bases__ (and a __base__) attribute:
>>> class Foo(object):
...     pass
...
>>> Foo.__bases__
(<class 'object'>,)
Sadly, these attributes aren't accessible in the class body, which would be very convenient for accessing parent class attributes without having to hard-code the name:
class Foo:
    cls_attr = 3

class Bar(Foo):
    cls_attr = __base__.cls_attr + 2
    # throws NameError: name '__base__' is not defined
Is there a reason why __bases__ and __base__ can't be accessed in the class body?
(To be clear, I'm asking if this is a conscious design decision. I'm not asking about the implementation; I know that __bases__ is a descriptor in type and that this descriptor can't be accessed until a class object has been created. I want to know why python doesn't create __bases__ as a local variable in the class body.)
I want to know why python doesn't create __bases__ as a local variable in the class body
As you know, a class statement is mostly a shortcut for a call to the metaclass (by default, type()): when the runtime hits a class statement, it executes all statements at the top level of the class body, collects the resulting bindings in a dedicated namespace dict, calls the metaclass with the class name, the base classes and that namespace dict, and binds the resulting class object to the class name in the enclosing scope (usually, but not necessarily, the module's top-level namespace).
The important point here is that it's the metaclass's responsibility to build the class object, and to allow for customisation of class object creation, the metaclass must be free to do whatever it wants with its arguments. Most often a custom metaclass will mainly work on the attrs dict, but it must also be able to mess with the bases argument. Since the metaclass is only invoked AFTER the class body statements have been executed, there's no way the runtime can reliably expose the bases in the class body scope: those bases could still be modified afterward by the metaclass.
There are also some more philosophical considerations here, notably with respect to explicit vs implicit, and as shx2 mentions, Python designers try to avoid magic variables popping out of the blue. There are indeed a couple of implementation variables (__module__ and, in py3, __qualname__) that are "automagically" defined in the class body namespace, but those are just names, mostly intended as additional debugging / inspection information for developers, and they have absolutely no impact on the class object's creation, nor on its properties and behaviour.
As always with Python, you have to consider the whole context (the execution model, the object model, how the different parts work together etc) to really understand the design choices. Whether you agree with the whole design and philosophy is another debate (and one that doesn't belong here), but you can be sure that yes, those choices are "conscious design decisions".
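As a practical consequence, the base-class attribute can only be reached once the class object exists. A minimal workaround sketch, reusing the Foo/Bar example from the question:

```python
class Foo:
    cls_attr = 3

class Bar(Foo):
    pass

# __bases__ / __base__ exist as soon as the class object is created,
# so the derived value can be attached right after the class body:
Bar.cls_attr = Bar.__base__.cls_attr + 2

assert Bar.cls_attr == 5
assert Bar.__bases__ == (Foo,)
```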
I am not answering as to why it was decided to be implemented the way it was, I'm answering why it wasn't implemented as a "local variable in the class body":
Simply because nothing in Python is a local variable magically defined in the class body. Python doesn't like names magically appearing out of nowhere.
It's because the class simply has not been created yet.
Consider the following:
>>> class baselessmeta(type):
...     def __new__(metaclass, class_name, bases, classdict):
...         return type.__new__(
...             metaclass,
...             class_name,
...             (),  # I can just ignore all the bases
...             {}
...         )
...
>>> class Baseless(int, metaclass=baselessmeta):
...     # imaginary print(__bases__, __base__)
...     ...
...
>>> Baseless.__bases__
(<class 'object'>,)
>>> Baseless.__base__
<class 'object'>
>>>
What should the imaginary print result in?
Every Python class is created via the type metaclass one way or another.
You pass int in the bases argument to type(), yet you do not know what the return value is going to be: the metaclass may use those bases directly, or it may return a class with completely different bases, as in the code above.
Just realized the "to be clear" part of your question, so my answer may be moot. Oh well.
I wrote a class in Python which inherits from type. I thought that this was the only requirement for a class to be usable as a metaclass, and had not defined a __new__ method for it. But on instantiating with this new class as the metaclass, I got an error stating the below:
TypeError: type.__new__() takes exactly 3 arguments (0 given)
The following is my code :
class a(type):
    pass

c = a()
My assumption is that when the class statement is processed, the __new__ method of type is called, because the default metaclass of all classes in Python is type.
Now when I instantiate the class a, which I have assumed to be a metaclass (under the assumption that any class inheriting from type is a metaclass), isn't that the same as creating a class? Why does this not result in type.__new__ being called with the correct arguments?
This does not work:
class a(type):
    pass

c = a()
...for the same reason for which this does not work:
c = type()
In the end, both do the same.
To use it as a metaclass, do this:
>>> class Class(metaclass=a):
...     pass
...
>>> Class
<class '__main__.Class'>
>>> type(Class)
<class '__main__.a'>
You could also instantiate the class directly, as you tried, but you have to provide the correct arguments:
AnotherClass = type('AnotherClass', (), {})
YetAnotherClass = a('YetAnotherClass', (), {})
This error is due to you not respecting type's signature.
Inheriting from type is indeed enough for a class to be used as a metaclass, but the thing is you actually have to use it as a metaclass.
type itself has two working modes: if called with 3 positional arguments, it creates a new class, and type is then the metaclass of that class. If called with 1 positional argument, it creates no new class or object at all; instead, it just returns that object's class.
But it makes no sense to call type with no arguments at all, and the arguments in the modes above are not optional. So you will get a TypeError if you try to call type with no arguments - and that is not a "TypeError because something went wrong with the type class" - it is a "TypeError because your call did not match the callable's signature".
When you inherit from type and change nothing, your class will behave the same as the original type in the three-argument mode, and the code responsible for that mode lives in type.__new__. (Note that the one-argument mode is special-cased in CPython for type itself, so a bare subclass called with a single argument still raises a TypeError.)
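A quick sketch of both calling modes on a bare subclass of type:

```python
class A(type):
    pass

# One-argument mode: works on type itself...
assert type(3) is int

# ...but a bare subclass does not get the one-argument shortcut: the
# call falls through to type.__new__, which demands three arguments.
try:
    A(3)
except TypeError as e:
    print(e)

# Three-argument mode works for both; A becomes the metaclass of C:
C = A('C', (), {})
assert type(C) is A
```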
Now, if you want to use your class as a metaclass, you can indeed call it, but in the three-argument form: you pass it the new class's name, its bases and its attributes - which can actually all be empty, but you have to pass a string, a tuple and a dictionary as these three arguments:
class A(type): pass
myclass = A("", (), {})
And now, A is working as the metaclass for myclass:
In [16]: type(myclass)
Out[16]: __main__.A
However, whenever one defines a metaclass it is more usual to use it with the metaclass= named argument when declaring a class body:
In [17]: class MyOtherClass(metaclass=A):
...: pass
...:
In [18]: type(MyOtherClass)
Out[18]: __main__.A
Python's runtime will then compile this class body, and when the bytecode for it is executed, it will make the calls to your metaclass's __prepare__, __new__ and __init__ methods, so that it works as a metaclass.
So, just in case it is not clear: when you derive a class from type intending to use it as a metaclass, there is no need to instantiate it further to declare that "it is now a metaclass". A subclass of type can already act as a metaclass, and its instances will be classes that have it as their metaclass.
I have a class object, cls. I want to know its metaclass. How do I do this?
(If I wanted to know its parent classes, I would do cls.__mro__. Is there something like this to get the metaclass?)
Ok - so, a class's metaclass is just its own "type", and can be given by type(cls), or by other means such as cls.__class__.
In Python 3.x there are no further ambiguities - as the syntax for creating a metaclass just passes it as a named parameter on the class declaration statement anyway.
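A minimal sketch of that in Python 3 (the class names Meta and A are made up for illustration):

```python
class Meta(type):
    pass

class A(metaclass=Meta):
    pass

# The metaclass is just the class's own type:
assert type(A) is Meta
assert A.__class__ is Meta

# Compare with an ordinary class, whose metaclass is type:
class B:
    pass

assert type(B) is type
```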
However, the syntax used for creating a metaclass in Python 2.x generates a side-effect that is worth noting.
Upon doing
class A(object):
    __metaclass__ = MyMeta
The __metaclass__ attribute is set to that value in the actual class, even if the actual metaclass is another one.
Consider:
def class_pre_decorator(name, bases, namespace):
    # do something with namespace
    return type(name, bases, namespace)
This is a callable that can be used in the metaclass declaration of both Python 2 and 3 - and it is valid. After resolving, the actual metaclass in both cases will simply be type. However, in Python 2.x, cls.__metaclass__ will point to the callable class_pre_decorator, even though type(cls) returns type, which is the correct metaclass. (Note that when callables are used in this way, they will not be used again when the class is further subclassed.)
There is no way in Python 3 to guess the callable actually used to instantiate a class if it gives no other hint (like setting an attribute on the class) that it was used:
# Python 2
class A(object):
    __metaclass__ = class_pre_decorator
On the console:
In [8]: type(A)
Out[8]: type
In [9]: A.__metaclass__
Out[9]: <unbound method A.class_pre_decorator>
and
# Python 3
class A(metaclass=class_pre_decorator):
    pass
And trying to read A.__metaclass__ will simply raise an AttributeError.
What is the purpose of checking self.__class__ ? I've found some code that creates an abstract interface class and then checks whether its self.__class__ is itself, e.g.
class abstract1(object):
    def __init__(self):
        if self.__class__ == abstract1:
            raise NotImplementedError("Interfaces can't be instantiated")
What is the purpose of that?
Is it to check whether the class is a type of itself?
The code is from NLTK's http://nltk.googlecode.com/svn/trunk/doc/api/nltk.probability-pysrc.html#ProbDistI
self.__class__ is a reference to the type of the current instance.
For instances of abstract1, that'd be the abstract1 class itself, which is what you don't want with an abstract class. Abstract classes are only meant to be subclassed, not to create instances directly:
>>> abstract1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __init__
NotImplementedError: Interfaces can't be instantiated
For an instance of a subclass of abstract1, self.__class__ would be a reference to the specific subclass:
>>> class Foo(abstract1): pass
...
>>> f = Foo()
>>> f.__class__
<class '__main__.Foo'>
>>> f.__class__ is Foo
True
Throwing an exception here is like using an assert statement elsewhere in your code; it protects you from making silly mistakes.
Note that the pythonic way to test for the type of an instance is to use the type() function instead, together with an identity test with the is operator:
class abstract1(object):
    def __init__(self):
        if type(self) is abstract1:
            raise NotImplementedError("Interfaces can't be instantiated")
type() should be preferred over self.__class__ because the latter can be shadowed by a class attribute.
There is little point in using an equality test here as for custom classes, __eq__ is basically implemented as an identity test anyway.
Python also includes a standard library to define abstract base classes, called abc. It lets you mark methods and properties as abstract and will refuse to create instances of any subclass that has not yet re-defined those names.
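A short sketch of that abc machinery (the class names Shape and Square are made up for illustration):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# A subclass that redefines the abstract names works normally:
assert Square(3).area() == 9

# Instantiating the ABC itself (or a subclass that hasn't redefined
# area()) raises TypeError:
try:
    Shape()
except TypeError as e:
    print(e)
```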
The code that you posted there is a no-op; self.__class__ == c1 is not part of a conditional so the boolean is evaluated but nothing is done with the result.
You could make an abstract base class check (via an if statement) whether self.__class__ is equal to the abstract class itself, as opposed to a subclass, in order to prevent instantiation of the abstract base class due to developer mistake.
What is the purpose of that? Is it to check whether the class is a type of itself?
Yes - if you try to construct an object of type abstract1, it'll throw that exception, telling you that you're not allowed to do so.
In Python 3, using type() to check the type and using __class__ return the same result:
class C:
    pass

ci = C()
print(type(ci))      # <class '__main__.C'>
print(ci.__class__)  # <class '__main__.C'>
I recently checked the implementation of the @dataclass decorator (Raymond Hettinger is directly involved in that project), and it uses __class__ to refer to the type.
So it is not wrong to use __class__ :)
The clues are in the name of the class, "abstract1", and in the error. This is intended to be an abstract class, meaning one that is intended to be subclassed. Each subclass will provide its own behaviour. The abstract class itself serves to document the interface, i.e. the methods and arguments that classes implementing the interface are expected to have. It is not meant to be instantiated itself, and the test is used to tell whether we are in the class itself or a subclass.
See the section on Abstract Classes in this article by Julien Danjou.
I am trying to learn inheritance in Python. I wrote a class Course as the superclass of a class AdvancedCourse, as shown below.
class Course(object):
    def __init__(self, crsName="python", duration=45):
        self.crsName = crsName
        self.duration = 25
And the sub class is:
import Course

class AdvancedCourse(Course):
    def __init__(self, crsName="python", duration=45):
        self.crsName = "java"
        self.duration = 25
But I got stuck on an error:
class AdvancedCourse (Course):
TypeError: module.__init__() takes at most 2 arguments (3 given)
Any suggestions?
This is a problem with importing, not inheritance. Course is the module: you need to inherit from Course.Course. (In Python we usually name modules in lower case, though).
I assume that class Course is in another module Course.py.
Then you should import it with from Course import Course. And @Daniel is right - you should have the module in a file course.py (lowercase), and the import statement will be from course import Course.
Note: I've only made this an answer because ElmoVanKielmo suggested it. It definitely shouldn't be the accepted answer, as it will only be confusing to novices… but maybe it will be interesting to others.
As Daniel Roseman's answer explains, import Course means that Course is a module, and Course.Course is the class you want.
So, what happens when you try to inherit from a module?
In Python, classes are objects, just like anything else. A class's type (which you can see by printing out type(AdvancedCourse)) is usually type, but you can specify a different type by setting a metaclass. When you inherit from a superclass, if you don't specify a metaclass, you get your superclass's metaclass. So, when you do this:
import Course

class AdvancedCourse(Course):
… you're saying that your metaclass is type(Course)—that is, module.*
Just as creating an instance means a call to the class's __init__, creating a class means a call to the metaclass's __init__. The arguments, besides self (which is a class here, not an instance, of course, and therefore usually named cls instead of self) are the class name, the list of base classes, and the dictionary of methods and other attributes. So, this definition:
class AdvancedCourse(Course):
    pass
… tries to initialize a module object by calling module.__init__(cls, 'AdvancedCourse', (Course,), {}).**
Of course modules are also just objects, so they have an __init__ too, but their arguments are just self, name and docstring. So, you're passing one too many arguments.
You're actually just getting lucky here; if module and type happened to take the same number of arguments in their constructor, you'd end up with something very weird that acted sort of like a class in some ways, but not others, causing all kinds of subtle bugs.
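You can see the module type's actual signature via types.ModuleType, and reproduce the asker's error by handing it the three class-creation arguments:

```python
import types

# module.__init__ takes a name and an optional docstring:
m = types.ModuleType('mymod', 'a docstring')
assert m.__name__ == 'mymod'
assert m.__doc__ == 'a docstring'

# ...so the three class-creation arguments (name, bases, dict) are
# one too many, which is exactly the TypeError from the question:
try:
    types.ModuleType('X', (object,), {})
except TypeError as e:
    print(e)
```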
If you want to play with this in the interactive interpreter, try this:
>>> class meta(type):
...     def __init__(self, name, bases, d):
...         print('{}\n{}\n{}\n{}\n'.format(self, name, bases, d))
...         super(meta, self).__init__(name, bases, d)
...
>>> class Silly(metaclass=meta):
...     __metaclass__ = meta
...
<class '__main__.Silly'>
Silly
()
{'__module__': '__main__', '__qualname__': 'Silly', '__metaclass__': <class '__main__.meta'>}
>>> class Sillier(Silly):
...     pass
...
<class '__main__.Sillier'>
Sillier
(<class '__main__.Silly'>,)
{'__module__': '__main__', '__qualname__': 'Sillier'}
In Python 2.x, you don't want the metaclass=meta in the class header; you can just put object there. In 3.x, you don't want the __metaclass__=meta in the body; you can just put pass there.
* module is one of those "hidden" types not accessible by name in builtins, but you can get at it as types.ModuleType, or just by importing something and using type(Course).
** Actually, even an empty class has a few members in its dictionary. For example, in Python 3.3, there's always at least a __module__ attribute and a __qualname__ attribute.