TypeError with inheritance in Python

I am trying to learn inheritance in Python. I wrote the class "Course" as the superclass of the class "AdvancedCourse", as shown below.
class Course(object):
    def __init__(self, crsName="python", duration=45):
        self.crsName = crsName
        self.duration = 25
And the sub class is:
import Course

class AdvancedCourse (Course):
    def __init__(self, crsName="python", duration=45):
        self.crsName = "java"
        self.duration = 25
But I got stuck on an error:
class AdvancedCourse (Course):
TypeError: module.__init__() takes at most 2 arguments (3 given)
Any suggestions?

This is a problem with importing, not inheritance. Course is the module: you need to inherit from Course.Course. (In Python we usually name modules in lower case, though).

I assume that the class Course is in another module, Course.py.
Then you should import it with from Course import Course. And @Daniel is right: you should put the module in a file named course.py (lowercase), and the import statement then becomes from course import Course.
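Putting it together, a minimal sketch of the fix (the file names here are just for illustration):
# course.py
class Course(object):
    def __init__(self, crsName="python", duration=45):
        self.crsName = crsName
        self.duration = duration


# advanced_course.py (hypothetical second file)
from course import Course  # import the class from the module

class AdvancedCourse(Course):
    def __init__(self, crsName="java", duration=45):
        # delegate to the superclass instead of repeating the assignments
        super(AdvancedCourse, self).__init__(crsName, duration)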

Note: I've only made this an answer because ElmoVanKielmo suggested it. It definitely shouldn't be the accepted answer, as it will only be confusing to novices… but maybe it will be interesting to others.
As Daniel Roseman's answer explains, import Course means that Course is a module, and Course.Course is the class you want.
So, what happens when you try to inherit from a module?
In Python, classes are objects, just like anything else. A class's type (which you can see by printing out type(AdvancedCourse)) is usually type, but you can specify a different type by setting a metaclass. When you inherit from a superclass, if you don't specify a metaclass, you get your superclass's metaclass. So, when you do this:
import Course
class AdvancedCourse(Course):
… you're saying that your metaclass is type(Course)—that is, module.*
Just as creating an instance means a call to the class's __init__, creating a class means a call to the metaclass's __init__. The arguments, besides self (which is a class here, not an instance, of course, and therefore usually named cls instead of self) are the class name, the list of base classes, and the dictionary of methods and other attributes. So, this definition:
class AdvancedCourse(Course):
    pass
… tries to initialize a module object by calling module.__init__(cls, 'AdvancedCourse', (Course,), {}).**
Of course modules are also just objects, so they have an __init__ too, but their arguments are just self, name and docstring. So, you're passing one too many arguments.
You're actually just getting lucky here; if module and type happened to take the same number of arguments in their constructor, you'd end up with something very weird that acted sort of like a class in some ways, but not others, causing all kinds of subtle bugs.
If you want to play with this in the interactive interpreter, try this:
>>> class meta(type):
...     def __init__(self, name, bases, d):
...         print('{}\n{}\n{}\n{}\n'.format(self, name, bases, d))
...         super(meta, self).__init__(name, bases, d)
>>> class Silly(metaclass=meta):
...     __metaclass__ = meta
<class '__main__.Silly'>
Silly
()
{'__module__': '__main__', '__qualname__': 'Silly', '__metaclass__': <class '__main__.meta'>}
>>> class Sillier(Silly):
...     pass
<class '__main__.Sillier'>
Sillier
(<class '__main__.Silly'>,)
{'__module__': '__main__', '__qualname__': 'Sillier'}
In Python 2.x, you don't want the metaclass=meta in the class header; you can just put object there. In 3.x, you don't want the __metaclass__=meta in the body; you can just put pass there.
* module is one of those "hidden" types not accessible by name in builtins, but you can get at it as types.ModuleType, or just by importing something and using type(Course).
** Actually, even an empty class has a few members in its dictionary. For example, in Python 3.3, there's always at least a __module__ attribute and a __qualname__ attribute.
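As a quick illustration of that footnote (a sketch; the exact wording of the TypeError varies between Python versions), you can poke at the module type directly:
import types

print(types.ModuleType)                    # <class 'module'>
m = types.ModuleType('m', 'a docstring')   # name and docstring are fine
print(m.__doc__)                           # a docstring

# Passing a third argument, the way class creation does, fails in the same
# way as the error in the question:
try:
    types.ModuleType('m', 'a docstring', {})
except TypeError as e:
    print(e)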

Related

Is it safe to just use __class__ in a Python class definition?

I understand __class__ can be used to get the class of an object; it can also be used to get the current class in a class definition. My question is: in a Python class definition, is it safe to just use __class__, rather than self.__class__?
#!/usr/bin/python3
class foo:
    def show_class():
        print(__class__)

    def show_class_self(self):
        print(self.__class__)

if __name__ == '__main__':
    x = foo()
    x.show_class_self()
    foo.show_class()
./foo.py
<class '__main__.foo'>
<class '__main__.foo'>
As the code above demonstrates, at least in Python 3, __class__ can be used to get the current class in the method show_class, without the presence of self. Is it safe? Will it cause problems in some special situations? (I can't think of any right now.)
__class__ is lexically scoped, whereas some_object.__class__ is dynamically dispatched. So the two can differ when the lexical scope is different from that of the receiver, for example when lambdas are involved:
#!/usr/bin/env python3
class ClassA:
    def print_callback(self, callback):
        print(callback(self))

class ClassB:
    def test(self):
        ClassA().print_callback(lambda o: o.__class__)  # <class '__main__.ClassA'>
        ClassA().print_callback(lambda _: __class__)    # <class '__main__.ClassB'>

ClassB().test()
It depends on what you're trying to achieve. Do you want to know which class's source code region you find yourself in, or the class of a particular object?
And I think it goes without saying, but I'll mention it explicitly: don't rely on the attribute directly, use the type function. I.e. prefer type(o) over o.__class__.
That is documented in the datamodel, so I believe it is safe/reliable.
From 3.3.3.5. Executing the class body:
Class variables must be accessed through the first parameter of instance or class methods, or through the implicit lexically scoped __class__ reference described in the next section.
From 3.3.3.6. Creating the class object:
__class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super
It is true that the docs say "any methods"; your foo.show_class is a function, but perhaps not convincingly a method. However, PEP 3135, which added this reference, is worded differently:
Every function will have a cell named __class__ that contains the class object that the function is defined in.
...
For functions defined outside a class body, __class__ is not defined, and will result in runtime SystemError.
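Tying the two answers together, a minimal sketch (with hypothetical class names) of how the lexically scoped __class__ differs from the runtime type of the receiver once subclassing is involved:
class Base:
    def which(self):
        # __class__ is the class this method is textually defined in (Base),
        # while type(self) is the runtime class of the instance.
        return __class__, type(self)

class Child(Base):
    pass

print(Child().which())  # (<class '__main__.Base'>, <class '__main__.Child'>)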

Why isn't __bases__ accessible in the class body?

Class objects have a __bases__ (and a __base__) attribute:
>>> class Foo(object):
...     pass
...
>>> Foo.__bases__
(<class 'object'>,)
Sadly, these attributes aren't accessible in the class body, which would be very convenient for accessing parent class attributes without having to hard-code the name:
class Foo:
    cls_attr = 3

class Bar(Foo):
    cls_attr = __base__.cls_attr + 2
    # throws NameError: name '__base__' is not defined
Is there a reason why __bases__ and __base__ can't be accessed in the class body?
(To be clear, I'm asking if this is a conscious design decision. I'm not asking about the implementation; I know that __bases__ is a descriptor in type and that this descriptor can't be accessed until a class object has been created. I want to know why python doesn't create __bases__ as a local variable in the class body.)
I want to know why python doesn't create __bases__ as a local variable in the class body
As you know, a class statement is mostly a shortcut for a type() call: when the runtime hits a class statement, it executes all statements at the top level of the class body, collects all resulting bindings in a dedicated namespace dict, calls the concrete metaclass (type by default) with the class name, the base classes and the namespace dict, and binds the resulting class object to the class name in the enclosing scope (usually, but not necessarily, the module's top-level namespace).
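Roughly, and glossing over the real machinery (__prepare__, metaclass resolution and so on), the class statement below builds the same kind of object as the explicit type() call:
class Foo:
    cls_attr = 3

# the class statement ...
class Bar(Foo):
    cls_attr = Foo.cls_attr + 2

# ... is more or less equivalent to collecting the body's bindings
# into a namespace dict and calling the metaclass (here, type):
namespace = {'cls_attr': Foo.cls_attr + 2}
Bar2 = type('Bar2', (Foo,), namespace)

print(Bar.cls_attr, Bar2.cls_attr)  # 5 5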
The important point here is that it's the metaclass's responsibility to build the class object, and to allow for class object creation customisations, the metaclass must be free to do whatever it wants with its arguments. Most often a custom metaclass will mainly work on the attrs dict, but it must also be able to mess with the bases argument. Now, since the metaclass is only invoked AFTER the class body statements have been executed, there's no way the runtime can reliably expose the bases in the class body scope, since those bases could be modified afterward by the metaclass.
There are also some more philosophical considerations here, notably with regard to explicit vs implicit, and as shx2 mentions, Python's designers try to avoid magic variables popping out of the blue. There are indeed a couple of implementation variables (__module__ and, in py3, __qualname__) that are "automagically" defined in the class body namespace, but those are just names, mostly intended as additional debugging/inspection information for developers, and they have absolutely no impact on the class object's creation nor on its properties or behaviour.
As always with Python, you have to consider the whole context (the execution model, the object model, how the different parts work together etc) to really understand the design choices. Whether you agree with the whole design and philosophy is another debate (and one that doesn't belong here), but you can be sure that yes, those choices are "conscious design decisions".
I am not answering why it was decided to be implemented the way it was; I'm answering why it wasn't implemented as a "local variable in the class body":
Simply because nothing in Python is a local variable magically defined in the class body. Python doesn't like names magically appearing out of nowhere.
It's because it simply has not been created yet.
Consider the following:
>>> class baselessmeta(type):
...     def __new__(metaclass, class_name, bases, classdict):
...         return type.__new__(
...             metaclass,
...             class_name,
...             (),  # I can just ignore all the bases
...             {}
...         )
...
>>> class Baseless(int, metaclass=baselessmeta):
...     # imaginary print(__bases__, __base__)
...     ...
...
>>> Baseless.__bases__
(<class 'object'>,)
>>> Baseless.__base__
<class 'object'>
>>>
What should the imaginary print result in?
Every Python class is created via the type metaclass one way or another.
You passed int in the bases argument, yet you do not know what the return value is going to be: the metaclass may use those bases directly, or, as in the line above, it may return a class with completely different bases.
Just realized your "to be clear" part, so now my answer is useless, haha. Oh well.

How to determine the metaclass of a class?

I have a class object, cls. I want to know its metaclass. How do I do this?
(If I wanted to know its parent classes, I would do cls.__mro__. Is there something like this to get the metaclass?)
OK, so: a class's metaclass is just its own "type", and can be obtained with type(cls), or by other means such as cls.__class__.
In Python 3.x there are no further ambiguities - as the syntax for creating a metaclass just passes it as a named parameter on the class declaration statement anyway.
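For example (a quick sketch with made-up names):
class MyMeta(type):
    pass

class Spam(metaclass=MyMeta):
    pass

print(type(Spam))      # <class '__main__.MyMeta'>
print(Spam.__class__)  # same thing
print(type(int))       # <class 'type'>, the default metaclass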
However, the syntax used for creating a metaclass in Python 2.x generates a side-effect that is worth noting.
Upon doing
class A(object):
    __metaclass__ = MyMeta
The __metaclass__ attribute is set to that value in the actual class, even if the actual metaclass is another one.
Consider:
def class_pre_decorator(name, bases, namespace):
    # do something with namespace
    return type(name, bases, namespace)
This is a callable that can be used in the metaclass declaration of both Python 2 and 3, and it is valid. After resolving, the actual metaclass in both cases will simply be type. However, in Python 2.x, cls.__metaclass__ will point to the callable class_pre_decorator, even though type(cls) returns type, which is the correct metaclass. (Note that when callables are used in this way, they will not be used again when the class is further subclassed.)
There is no way in Python 3 to guess the callable actually used to instantiate a class if it gives no other hint (like setting an attribute on the class) that it was used:
# python 2
class A(object):
    __metaclass__ = class_pre_decorator
On the console:
In [8]: type(A)
Out[8]: type
In [9]: A.__metaclass__
Out[9]: <unbound method A.class_pre_decorator>
and
# Python 3
class A(metaclass=class_pre_decorator):
    pass
And trying to read A.__metaclass__ will simply raise an AttributeError.

Calling the parent class' __init__ right away when subclassing

I was reading the answers to Usage of __slots__? and I noticed that one of the examples is :
from collections import namedtuple

class MyNT(namedtuple('MyNT', 'bar baz')):
    """MyNT is an immutable and lightweight object"""
    __slots__ = ()
I saw that the __init__ of namedtuple was called when it was being subclassed by MyNT.
I went ahead and tested it for myself and made this code which is my first attempt to understand such behavior:
class objectP():
    def __init__(self, name):
        print('object P inited')

class p(objectP('asd')):
    pass
I got an error stating that 4 arguments were given, so I changed it to:
class objectP():
    def __init__(self, *a):
        print('object P inited')

class p(objectP('asd')):
    pass
which now produces an output of
object P inited
object P inited
What does the line of code above mean? Calling __init__ when subclassing?
Why is object P inited printed twice?
The example code you list at the top of your question works because the namedtuple function (which isn't actually a class itself) returns a class. You're inheriting from that returned class, not from namedtuple itself.
The same structure doesn't work in your other code because calling the objectP class returns an instance, and that instance isn't a class that can be inherited from.
You can write a function that returns a class, like namedtuple does. You can also write a class whose instances are other classes. That's called a "metaclass", and in Python 3 metaclasses need to inherit from the type class.
class MyMeta(type):
    def __new__(meta, name, bases, dct):
        print("creating a new type named", name)
        return super().__new__(meta, name, bases, dct)

class MyClass1(MyMeta("Base", (), {})):  # you can inherit from an instance
    pass

class MyClass2(metaclass=MyMeta):  # or use the special metaclass syntax
    pass
While metaclasses are neat, they can be a bit confusing if you're new to them. It gets a bit metaphysical, with type being an instance of itself (and a subclass of object for good measure). This stuff is the magical core of Python's type system, and you don't really need to understand it to use classes in ordinary ways.
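To see the point from the first paragraph concretely (namedtuple is a factory function, and its return value is an ordinary class), a quick check:
from collections import namedtuple

NT = namedtuple('MyNT', 'bar baz')
print(type(NT))               # <class 'type'>: NT is itself a class
print(issubclass(NT, tuple))  # True
print(NT('x', 'y'))           # MyNT(bar='x', baz='y')

class MyNT(NT):               # inheriting from the returned class works fine
    __slots__ = ()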

Is it safe to make two class objects with the same name?

It's possible to use type in Python to create a new class object, as you probably know:
A = type('A', (object,), {})
a = A() # create an instance of A
What I'm curious about is whether there's any problem with creating different class objects with the same name, eg, following on from the above:
B = type('A', (object,), {})
In other words, is there an issue with this second class object, B, having the same name as our first class object, A?
The motivation for this is that I'd like to get a clean copy of a class to apply different decorators to without using the inheritance approach described in this question.
So I'd like to define a class normally, eg:
class Fruit(object):
    pass
and then make a fresh copy of it to play with:
def copy_class(cls):
    return type(cls.__name__, cls.__bases__, dict(cls.__dict__))

FreshFruit = copy_class(Fruit)
In my testing, things I do with FreshFruit are properly decoupled from things I do to Fruit.
However, I'm unsure whether I should also be mangling the name in copy_class in order to avoid unexpected problems.
In particular, one concern I have is that this could cause the class to be replaced in the module's dictionary, such that future imports (e.g., from module import Fruit) return the copied class.
There is no reason why you can't have 2 classes with the same __name__ in the same module if you want to and have a good reason to do so.
e.g. in your example, from module import Fruit: Python doesn't care at all about the __name__ of the class. It looks in the module's globals for Fruit and imports what it finds there.
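A small sketch of that point: two classes can share a __name__ and still be completely independent objects.
A = type('A', (object,), {})
B = type('A', (object,), {})     # same __name__, different class object

print(A is B)                    # False
print(A.__name__ == B.__name__)  # True
print(isinstance(A(), B))        # False: their instances are unrelated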
Note that, in general, this approach isn't great if you're using super (although the same can be said for class decorators ...):
class A(Base):
    def foo(self):
        super(A, self).foo()

B = copy_class(A)
In this case, when B.foo is called, it will end up calling super(A, self), which could lead to funky behaviour in a number of circumstances...
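For instance (a sketch with a hypothetical Base class), the hard-coded super(A, self) call rejects instances of the copy, because the copy is not a subclass of A:
class Base(object):
    def foo(self):
        print('Base.foo')

class A(Base):
    def foo(self):
        super(A, self).foo()  # A is hard-coded here

def copy_class(cls):
    return type(cls.__name__, cls.__bases__, dict(cls.__dict__))

B = copy_class(A)
A().foo()  # prints 'Base.foo'
B().foo()  # TypeError: a B instance is not an instance or subtype of A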
