Python base class can be an object without __mro_entries__

Today I discovered that a Python object without __mro_entries__ can be used as a base class.
Example:
class Base:
    def __init__(self, *args):
        self.args = args
    def __repr__(self):
        return f'{type(self).__name__}(*{self.args!r})'

class Delivered(Base):
    pass

b = Base()
d = Delivered()

class Foo(b, d):
    pass

print(type(Foo) is Delivered)
print(Foo)

Output:
True
Delivered(*('Foo', (Base(*()), Delivered(*())), {'__module__': '__main__', '__qualname__': 'Foo'}))
As a result, Foo is an instance of the Delivered class, and it is not a valid type.
I understand the use case for __mro_entries__, but what is the use case for allowing an object without __mro_entries__ as a base class? Is this a bug in Python?

TL;DR Not a bug, but an extreme abuse of the class statement.
A class statement is equivalent to a call to a metaclass. Lacking an explicit metaclass keyword argument, the metaclass has to be inferred from the base class(es). Here, the "metaclass" of the "class" b is Base, while the metaclass of d is Delivered. Since each is a non-strict subclass of a common metaclass (Base), Delivered is chosen as the more specific metaclass.
>>> Delivered('Foo', (b, d), {})
Delivered(*('Foo', (Base(*()), Delivered(*())), {}))
Delivered can be used as a metaclass because it accepts the same arguments that the class statement expects a metaclass to accept: a string for the name of the type, a sequence of parent classes, and a mapping to use as the namespace. In this case, Delivered doesn't use them to create a type; it simply prints the arguments.
As a result, Foo is bound to an instance of Delivered, not a type. So Foo is a class only in the sense that it was produced by a class statement: it is decidedly not a type.
>>> issubclass(Foo, Delivered)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: issubclass() arg 1 must be a class
>>> Foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'Delivered' object is not callable
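For comparison, here is a minimal sketch (class names are illustrative, not from the question) of what a class statement does with the default metaclass, type: the statement is just sugar for calling the metaclass with the name, the bases, and the namespace.
>>> class Spam(int):
...     flavour = 'smoked'
...
>>> Spam2 = type('Spam2', (int,), {'flavour': 'smoked'})  # the equivalent metaclass call
>>> type(Spam) is type(Spam2) is type
True
>>> Spam2(3).flavour
'smoked'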

Related

How should we actually define constants with namespaces in Python 3? [duplicate]

I would like to store a bunch of variables under a Python namespace without creating a separate module. I notice that the result of ArgumentParser's parse_args() is an argparse.Namespace object. You can access the arguments through dot-syntax.
from argparse import ArgumentParser
parser = ArgumentParser()
# some arg definitions here...
args = parser.parse_args() # returns an `argparse.Namespace` object
How can I create the equivalent of an argparse.Namespace? I know I can do something similar with a dict but I would like to use dot-syntax. Is there any built-in class that just lets you assign arbitrary attributes?
Starting with Python 3.3 you can use types.SimpleNamespace.
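For example, a minimal sketch of SimpleNamespace in use:
>>> from types import SimpleNamespace
>>> ns = SimpleNamespace(x=1)   # attributes can be passed to the constructor...
>>> ns.y = 2                    # ...or assigned later with dot-syntax
>>> ns.x, ns.y
(1, 2)
>>> ns
namespace(x=1, y=2)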
However, an alternative is simply:
class Namespace(object):
    pass

namespaceA = Namespace()
namespaceA.x = 1
The full code for SimpleNamespace isn't much longer.
Note that you cannot simply use an object instance:
>>> o = object()
>>> o.x = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute 'x'
This is because instances of object do not have a __dict__ attribute:
>>> vars(object())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: vars() argument must have __dict__ attribute
This means you cannot set attributes on an instance of object.
Any subclass of object that does not set the __slots__ attribute does have a __dict__, which is used (by default) to store and retrieve attributes:
>>> class Namespace(object):
...     pass
...
>>> a = Namespace()
>>> a.x = 1  # same as a.__dict__['x'] = 1
>>> a.__dict__
{'x': 1}
For further information about attribute setting/lookup you should learn about descriptors.
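To illustrate the __slots__ caveat mentioned above, here is a minimal sketch (the class name is illustrative): declaring __slots__ suppresses the per-instance __dict__, so arbitrary attributes can no longer be added.
>>> class Slotted(object):
...     __slots__ = ('x',)   # instances get no __dict__, only a slot for 'x'
...
>>> s = Slotted()
>>> s.x = 1                  # fine: 'x' is a declared slot
>>> s.y = 2                  # no __dict__ to fall back on
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Slotted' object has no attribute 'y'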
A class can be used as a namespace, where the variables are class members:
class Namespace1:
    foo = 'a'
    bar = 5
To prevent callers from trying to instantiate it, you can use a base class like:
class objectless(object):
    def __new__(cls, *args, **kwargs):
        raise RuntimeError('%s should not be instantiated' % cls)
And use it like:
class Namespace1(objectless):
    ...
It sounds like you want a Python class. See the docs.
Depending on what exactly you want, you can define a bunch of variables as attributes of a class (either on an instance or on the class itself) and access them that way.
If you want "the equivalent of an argparse.Namespace", use argparse.Namespace:
from argparse import Namespace
ns = Namespace(a=1)
print(ns.a)
If I'm understanding correctly, you want to dynamically add attributes to a namespace. For example, a class that parses command-line flags and lets you access them directly, like args.verbose, right? If so, you may be thinking of setattr(), which lets you add arbitrary attributes.
class Foo(object):
    pass

foo = Foo()
setattr(foo, 'ack', 'bar')
print(foo.ack)  # prints 'bar'

Strange Nuance of self in Method Argument List

I've encountered a pythonic curiosity whose meaning eludes me.
I've found that method dispatch using a dictionary in a class appears to work differently, depending on whether the dispatch is done in __init__(). The difference is whether the selected method is invoked with or without the self argument.
Code illustration:
#!/usr/bin/python
class strange(object):
    def _eek(): # no self argument
        print "Hi!\n"

    dsp_dict = {"Get_eek" : _eek}
    noideek = dsp_dict["Get_eek"]

    def __init__(self):
        self.ideek = self.dsp_dict["Get_eek"]
        self.ideek2 = self._eek
        self.ideek3 = self.noideek

    def call_ideek(self):
        try:
            self.ideek()
        except TypeError:
            print "Alas!\n"

    def call_ideek2(self):
        try:
            self.ideek2()
        except TypeError:
            print "Alas!\n"

    def call_ideek3(self):
        try:
            self.ideek3()
        except TypeError:
            print "Alas!\n"

    def call_noideek(self):
        try:
            self.noideek()
        except TypeError:
            print "Alas!\n"

x = strange()
print "Method routed through __init__() using the dictionary:"
x.call_ideek()
print "Method routed through __init__() directly:"
x.call_ideek2()
print "Method routed through __init__() using attribute set from dictionary:"
x.call_ideek3()
print "Method not routed through __init__():"
x.call_noideek()
Running this, I see:
I, kazoo > ./curio.py
Method routed through __init__() using the dictionary:
Hi!
Method routed through __init__() directly:
Alas!
Method routed through __init__() using attribute set from dictionary:
Alas!
Method not routed through __init__():
Alas!
The try-except clauses are catching this sort of thing:
Traceback (most recent call last):
File "./curio.py", line 19, in <module>
x.call_noideek()
TypeError: call_noideek() takes no arguments (1 given)
That is, if the indirection is accomplished in __init__ by reference to the dictionary, the resulting method is not called with the implicit self argument.
But if the indirection is accomplished either in __init__ by direct reference to _eek(), or by creating a new attribute (noideek) and setting it from the dictionary, or even in __init__ by reference to the attribute originally set from the dictionary, then the self argument is in the call list.
I can work with this, but I don't understand it. Why the difference in call signature?
Have a look at this:
>>> x.ideek
<function _eek at 0x036AB130>
>>> x.ideek2
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
>>> x.ideek3
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
>>> x.noideek
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
>>> x.dsp_dict
{'Get_eek': <function _eek at 0x036AB130>}
>>> x._eek
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
You can see the difference between plain functions and bound methods here.
When you pull the function out of that dict, no binding to the class happens and you just get the plain function back (see the output of x.dsp_dict).
Only when you assign that function to noideek in the class body does attribute lookup on the instance turn it into a bound method again.
So when you fetch it from the dict inside __init__, Python treats it as an ordinary function, changes nothing, and omits the self parameter (ideek).
ideek2 and ideek3 can be seen as "aliases" where that bound method is only re-referenced.
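The underlying mechanism is the descriptor protocol: a function stored in a class becomes a bound method only when it is looked up as an attribute of the class or instance, not when it is pulled straight out of a dict. A minimal sketch (class and method names are illustrative, not from the original post):
>>> class Demo(object):
...     def _eek(self):
...         print "Hi!"
...
>>> d = Demo()
>>> raw = Demo.__dict__['_eek']   # dict lookup: still a plain function
>>> raw
<function _eek at 0x...>
>>> bound = raw.__get__(d, Demo)  # descriptor protocol: now a bound method
>>> bound
<bound method Demo._eek of <__main__.Demo object at 0x...>>
>>> bound()                       # self is supplied automatically
Hi!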

Setting attributes on __func__

In the documentation on instance methods it states that:
Methods also support accessing (but not setting) the arbitrary function attributes on the underlying function object.
But I can't seem to verify that restriction. I tried setting both an arbitrary value and one of the "Special Attributes" of functions:
class cls:
    def foo(self):
        f = self.foo.__func__
        f.a = "some value"  # arbitrary value
        f.__doc__ = "Documentation"
        print(f.a, f.__doc__)
When executed, no errors are produced and the output is as expected:
cls().foo() # prints out f.a, f.__doc__
What is it that I'm misunderstanding with the documentation?
You are misunderstanding what is being said. It says that you can access, but not set, the attributes of the underlying function object through the method!
>>> class Foo:
...     def foo(self):
...         self.foo.__func__.a = 1
...         print(self.foo.a)
...         self.foo.a = 2
...
>>> Foo().foo()
1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in foo
AttributeError: 'method' object has no attribute 'a'
Note how foo.a is updated when you set it on the __func__ value, but you cannot set it directly using self.foo.a = value.
So the function object can be modified as you please, the method wrapper only provides read-only access to the attributes on the underlying function.
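As a further illustration, here is a minimal sketch (the class and attribute names are made up): attributes can be set through __func__, or in Python 3 directly on the function via the class, and either way they are readable through the bound-method wrapper.
>>> class C:
...     def m(self):
...         pass
...
>>> C.m.note = 'hello'        # Python 3: C.m is the plain function, so this works
>>> c = C()
>>> c.m.__func__.note = 'hi'  # setting through the underlying function also works
>>> c.m.note                  # reading through the bound method is allowed
'hi'
>>> c.m.note = 'nope'         # but setting through the method wrapper is not
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'method' object has no attribute 'note'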

Does python consider any method without cls or self arguments implicitly as static?

The following are test classes with methods that take neither cls nor self arguments and don't have the @staticmethod decorator. They work like normal static methods without complaining about arguments. This seems contrary to my understanding of Python methods. Does Python automatically treat non-class, non-instance methods as static?
>>> class Test():
...     def testme(s):
...         print(s)
...
>>> Test.testme('hello')
hello
>>> class Test():
...     def testme():
...         print('no')
...
>>> Test.testme()
no
P.S: I am using python3.4
It sort of does, yes. However, note that if you call such an "implicit static method" on an instance, you will get an error:
>>> Test().testme()
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
Test().testme()
TypeError: testme() takes 0 positional arguments but 1 was given
This is because the self parameter still gets passed, which doesn't happen with a proper @staticmethod:
>>> class Test:
...     @staticmethod
...     def testme():
...         print('no')
...
>>> Test.testme()
no
>>> Test().testme()
no
Note that this doesn't work in Python 2:
>>> class Test(object):
...     def testme():
...         print 'no'
...
>>> Test.testme()
Traceback (most recent call last):
File "<ipython-input-74-09d78063da08>", line 1, in <module>
Test.testme()
TypeError: unbound method testme() must be called with Test instance as first argument (got nothing instead)
But in Python 3, unbound methods were removed, as Alex Martelli points out in this answer. So really all you're doing is calling a plain function that happens to be defined inside the Test class.
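A quick way to see this (a minimal sketch): in Python 3, looking the function up on the class gives back the plain function unchanged, while looking it up on an instance produces a bound method, which is where the extra self argument comes from.
>>> class Test:
...     def testme():
...         print('no')
...
>>> Test.testme              # plain function: no binding through the class
<function Test.testme at 0x...>
>>> Test().testme            # bound method: self will be passed implicitly
<bound method Test.testme of <__main__.Test object at 0x...>>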

single argument super() - what's it for? [duplicate]

I wonder when to use which flavour of Python 3's super().
Help on class super in module builtins:
class super(object)
| super() -> same as super(__class__, <first argument>)
| super(type) -> unbound super object
| super(type, obj) -> bound super object; requires isinstance(obj, type)
| super(type, type2) -> bound super object; requires issubclass(type2, type)
Until now I've used super() only without arguments, and it worked as a Java developer would expect.
Questions:
What does "bound" mean in this context?
What is the difference between bound and unbound super object?
When to use super(type, obj) and when super(type, type2)?
Would it be better to name the superclass explicitly, as in Mother.__init__(...)?
Let's use the following classes for demonstration:
class A(object):
    def m(self):
        print('m')

class B(A): pass
An unbound super object doesn't dispatch attribute access to the class; you have to use the descriptor protocol:
>>> super(B).m
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'super' object has no attribute 'm'
>>> super(B).__get__(B(), B)
<super: <class 'B'>, <B object>>
A super object bound to an instance gives bound methods:
>>> super(B, B()).m
<bound method B.m of <__main__.B object at 0xb765dacc>>
>>> super(B, B()).m()
m
A super object bound to a class gives a plain function (an unbound method in Python 2 terms):
>>> super(B, B).m
<function m at 0xb761482c>
>>> super(B, B).m()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: m() takes exactly 1 positional argument (0 given)
>>> super(B, B).m(B())
m
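The super(type, type2) form shows up most naturally inside classmethods, where there is a class rather than an instance to bind to. A minimal sketch (class names are illustrative):
>>> class Mother(object):
...     @classmethod
...     def create(cls):
...         return cls()
...
>>> class Daughter(Mother):
...     @classmethod
...     def create(cls):
...         # super(Daughter, cls) is the super(type, type2) form:
...         # cls is Daughter (or a subclass of it), not an instance
...         return super(Daughter, cls).create()
...
>>> type(Daughter.create()).__name__
'Daughter'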
See Michele Simionato's "Things to Know About Python Super" blog post series (1, 2, 3) for more information.
A quick note: the new usage of super() is outlined in PEP 3135 (New Super), which was implemented in Python 3.0. Of particular relevance:
super().foo(1, 2)
to replace the old:
super(Foo, self).foo(1, 2)
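For completeness, a minimal sketch of the zero-argument form in context (class names are illustrative); the compiler fills in __class__ and the first argument automatically:
>>> class Greeter(object):
...     def greet(self):
...         return 'hello'
...
>>> class PoliteGreeter(Greeter):
...     def greet(self):
...         # equivalent to super(PoliteGreeter, self).greet()
...         return super().greet() + ', please'
...
>>> PoliteGreeter().greet()
'hello, please'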
