Strange Nuance of self in Method Argument List - Python

I've encountered a pythonic curiosity whose meaning eludes me.
I've found that method dispatch using a dictionary in a class appears to work differently, depending on whether the dispatch is done in __init__(). The difference is whether the selected method is invoked with or without the self argument.
Code illustration:
#!/usr/bin/python
class strange(object):
    def _eek(): # no self argument
        print "Hi!\n"

    dsp_dict = {"Get_eek": _eek}
    noideek = dsp_dict["Get_eek"]

    def __init__(self):
        self.ideek = self.dsp_dict["Get_eek"]
        self.ideek2 = self._eek
        self.ideek3 = self.noideek

    def call_ideek(self):
        try:
            self.ideek()
        except TypeError:
            print "Alas!\n"

    def call_ideek2(self):
        try:
            self.ideek2()
        except TypeError:
            print "Alas!\n"

    def call_ideek3(self):
        try:
            self.ideek3()
        except TypeError:
            print "Alas!\n"

    def call_noideek(self):
        try:
            self.noideek()
        except TypeError:
            print "Alas!\n"

x = strange()
print "Method routed through __init__() using the dictionary:"
x.call_ideek()
print "Method routed through __init__() directly:"
x.call_ideek2()
print "Method routed through __init__() using attribute set from dictionary:"
x.call_ideek3()
print "Method not routed through __init__():"
x.call_noideek()
Running this, I see:
I, kazoo > ./curio.py
Method routed through __init__() using the dictionary:
Hi!
Method routed through __init__() directly:
Alas!
Method routed through __init__() using attribute set from dictionary:
Alas!
Method not routed through __init__():
Alas!
The try-except clauses are catching this sort of thing:
Traceback (most recent call last):
  File "./curio.py", line 19, in <module>
    x.call_noideek()
TypeError: _eek() takes no arguments (1 given)
That is, if the indirection is accomplished in __init__ by reference to the dictionary, the resulting method is not called with the implicit self argument.
But if the indirection is accomplished in __init__ by direct reference to _eek, by a class-level attribute (noideek) set from the dictionary, or in __init__ by reference to that attribute originally set from the dictionary, then the self argument is in the call list.
I can work with this, but I don't understand it. Why the difference in call signature?

Have a look at this:
>>> x.ideek
<function _eek at 0x036AB130>
>>> x.ideek2
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
>>> x.ideek3
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
>>> x.noideek
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
>>> x.dsp_dict
{'Get_eek': <function _eek at 0x036AB130>}
>>> x._eek
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
You can see the difference between plain functions and bound methods here.
When you store the function in that dict, the stored value is just the raw function object; it carries no information about its enclosing class and is treated as a function (see the output of x.dsp_dict).
Only if you assign that function to noideek in the class context does attribute access turn it back into a method.
So when __init__ fetches it straight out of the dict, Python treats it as a plain function, changing nothing and omitting the self parameter (ideek).
ideek2 and ideek3 can be seen as "aliases" where that method is only re-referenced.
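The mechanism behind this is the descriptor protocol: attribute access on a class or instance calls the function's __get__ method, which manufactures the bound method, whereas plain dictionary indexing never does. Continuing the session above (a sketch; the explicit __get__ call stands in for what attribute access does automatically):
>>> strange.__dict__['_eek']
<function _eek at 0x036AB130>
>>> strange.__dict__['_eek'].__get__(x, strange)
<bound method strange._eek of <__main__.strange object at 0x03562C30>>
>>> x.dsp_dict["Get_eek"]
<function _eek at 0x036AB130>
This is exactly why x._eek is a bound method while x.dsp_dict["Get_eek"] stays a plain function: only the attribute-lookup path goes through __get__.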


Python base class can be object without __mro_entries__

Today I discovered that a Python object without __mro_entries__ can be used as a base class.
Example:
class Base:
    def __init__(self, *args):
        self.args = args

    def __repr__(self):
        return f'{type(self).__name__}(*{self.args!r})'

class Delivered(Base):
    pass

b = Base()
d = Delivered()

class Foo(b, d):
    pass

print(type(Foo) is Delivered)
print(Foo)
True
Delivered(*('Foo', (Base(*()), Delivered(*())), {'__module__': '__main__', '__qualname__': 'Foo'}))
As a result, Foo is an instance of the Delivered class, and it is not a valid type.
I understand the use case for __mro_entries__, but what is the use case for using an object without __mro_entries__ as a base class? Is it a bug in Python?
TL;DR Not a bug, but an extreme abuse of the class statement.
A class statement is equivalent to a call to a metaclass. Lacking an explicit metaclass keyword argument, the metaclass has to be inferred from the base class(es). Here, the "metaclass" of the "class" b is Base, while the metaclass of d is Delivered. Since each is a non-strict subclass of a common metaclass (Base), Delivered is chosen as the more specific metaclass.
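You can verify that inference directly; these checks are mine, continuing the session from the question:
>>> type(b), type(d)
(<class '__main__.Base'>, <class '__main__.Delivered'>)
>>> issubclass(Delivered, Base)
True
Both candidate "metaclasses" are the types of the bases; since Delivered is a subclass of Base, it is the more specific one and wins.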
>>> Delivered('Foo', (b, d), {})
Delivered(*('Foo', (Base(*()), Delivered(*())), {}))
Delivered can be used as a metaclass because it accepts the same arguments that the class statement expects a metaclass to accept: a string for the name of the type, a sequence of parent classes, and a mapping to use as the namespace. In this case, Delivered doesn't use them to create a type; it simply prints the arguments.
As a result, Foo is bound to an instance of Delivered, not a type. So Foo is a class only in the sense that it was produced by a class statement: it is decidedly not a type.
>>> issubclass(Foo, Delivered)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: issubclass() arg 1 must be a class
>>> Foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'Delivered' object is not callable

super() doesn't work in classmethod

ctypes has a classmethod from_buffer. I'm trying to add some custom processing to from_buffer() in a subclass, but I'm having trouble calling super(). Here is an example:
from ctypes import c_char, Structure

class Works(Structure):
    _fields_ = [
        ("char", c_char),
    ]

class DoesntWork(Works):
    @classmethod
    def from_buffer(cls, buf):
        print "do some extra stuff"
        return super(DoesntWork, cls).from_buffer(buf)

print Works.from_buffer(bytearray('c')).char
print DoesntWork.from_buffer(bytearray('c')).char
This results in the error:
c
do some extra stuff
Traceback (most recent call last):
  File "superctypes.py", line 18, in <module>
    print DoesntWork.from_buffer(bytearray('c')).char
  File "superctypes.py", line 14, in from_buffer
    return super(DoesntWork, cls).from_buffer(buf)
AttributeError: 'super' object has no attribute 'from_buffer'
What am I missing? Why doesn't super work here?
from_buffer is not actually a class method on Structure; it is a method on Structure's type (that is, its metaclass). As such, it can't be overridden in the usual fashion: it's like asking to override a normal method for a single object, not a class.
Calling type(cls).from_buffer(cls, buf) works. It's pretty terrible, but I don't immediately see another option.
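Applied to the example above, the workaround might look like this (a sketch under the question's Python 2 setup; the class name DoesWork is mine):
class DoesWork(Works):
    @classmethod
    def from_buffer(cls, buf):
        print "do some extra stuff"
        # from_buffer lives on the metaclass (the type of the class), so fetch
        # it from type(cls) and pass cls explicitly instead of using super()
        return type(cls).from_buffer(cls, buf)

print DoesWork.from_buffer(bytearray('c')).char  # extra message, then 'c'
This sidesteps super() entirely: the metaclass lookup never sees the subclass's own from_buffer, so there is no recursion.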

What exactly does "AttributeError: temp instance has no attribute '__getitem__'" mean?

I'm trying to understand a problem I'm having with Python 2.7 right now.
Here is my code from the file test.py:
class temp:
    def __init__(self):
        self = dict()
        self[1] = 'bla'
Then, on the terminal, I enter:
from test import temp
a=temp
If I enter a, I get this:
>>> a
<test.temp instance at 0x10e3387e8>
And if I try to read a[1], I get this:
>>> a[1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: temp instance has no attribute '__getitem__'
Why does this happen?
First, the code you posted cannot yield the error you noted. You have not instantiated the class; a is merely another name for temp. So your actual error message will be:
TypeError: 'classobj' object has no attribute '__getitem__'
Even if you instantiate it (a = temp()) it still won't do what you seem to expect. Assigning self = dict() merely changes the value of the variable self within your __init__() method; it does not do anything to the instance. When the __init__() method ends, this variable goes away, since you did not store it anywhere else.
It seems as if you might want to subclass dict instead:
class temp(dict):
    def __init__(self):
        self[1] = 'bla'
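With that change, and remembering to instantiate the class, indexing behaves as intended (my quick check, not part of the original answer):
>>> from test import temp
>>> a = temp()
>>> a[1]
'bla'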

using an object passed as a function argument (function is defined inside another class)

I am trying to access an object which was passed to my function (defined inside my class).
Essentially I am invoking a function publish_alert defined inside class AlertPublishInterface.
1. The caller passes an instance of a class called AlertVO into publish_alert.
2. Once I receive this passed instance via publish_alert, I simply try to access its data members inside class AlertPublishInterface (in which the called function publish_alert is defined).
I get an AttributeError in step 2, i.e., when accessing members of the passed instance:
AttributeError: AlertPublishInterface instance has no attribute 'alert_name'
Here is code snippet:
AlertPublishInterface file:
import datetime
import logging.config
import django_model_importer

logging.config.fileConfig('logging.conf')
logger = logging.getLogger('alert_publish_interface')

from alert.models import AlertRule  # database table objects defined in the model file
from alert.models import AlertType  # database table objects defined in the model file

import AlertVO  # this is the instance whose members I am trying to simply access below!

class AlertPublishInterface:
    def publish_alert(o_alert_vo, dummy_remove):
        print o_alert_vo.alert_name  #-----1----#
        alerttype_id = AlertType.objects.filter(o_alert_vo.alert_name,
                                                o_alert_vo.alert_category, active_exact=1)  #-----2----#
        return
AlertVO is defined as:
class AlertVO:
    def __init__(self, alert_name, alert_category, notes,
                 monitor_item_id, monitor_item_type, payload):
        self.alert_name = alert_name
        self.alert_category = alert_category
        self.notes = notes
        self.monitor_item_id = monitor_item_id
        self.monitor_item_type = monitor_item_type
        self.payload = payload
Calling code snippet (which invokes AlertPublishInterface's publish_alert function):
from AlertVO import AlertVO
from AlertPublishInterface import AlertPublishInterface

o_alert_vo = AlertVO(alert_name='BATCH_SLA', alert_category='B',
                     notes="some notes", monitor_item_id=2, monitor_item_type='B',
                     payload='actual=2, expected=1')
print o_alert_vo.alert_name
print o_alert_vo.alert_category
print o_alert_vo.notes
print o_alert_vo.payload

alert_publish_i = AlertPublishInterface()
alert_publish_i.publish_alert(o_alert_vo)
However, it fails at the lines marked #-----1----# and #-----2----# above with an AttributeError; it seems to associate the AlertVO object (the o_alert_vo instance) with the AlertPublishInterface class.
Complete screen output from the run:
python test_publisher.py
In test_publisher
BATCH_SLA
B
some notes
actual=2, expected=1
Traceback (most recent call last):
  File "test_publisher.py", line 17, in <module>
    alert_publish_i.publish_alert(o_alert_vo.alert_name)
  File "/home/achmon/data_process/AlertPublishInterface.py", line 26, in publish_alert
    print o_alert_vo.alert_name
AttributeError: AlertPublishInterface instance has no attribute 'alert_name'
I can't get rid of the above error after a lot of searching around... can someone please help?
Thanks! (It's kinda urgent too!)
The reason you are getting this error is that the first argument passed to a method is the instance itself. Normally, this parameter is called self.
I can also identify this as Django (in which case you should also inherit, if not from some other Django class, then from object, to make it a new-style class).
Anyway, just add self as the first argument of publish_alert and it will probably stop throwing that error:
def publish_alert(o_alert_vo, dummy_remove): should be def publish_alert(self, o_alert_vo, dummy_remove):
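A minimal sketch of the corrected class (the default value for dummy_remove is my addition, since the calling snippet above passes only o_alert_vo):
class AlertPublishInterface(object):  # inherit from object: new-style class
    def publish_alert(self, o_alert_vo, dummy_remove=None):
        # self is now the AlertPublishInterface instance, so o_alert_vo really
        # is the AlertVO argument and its attributes resolve as expected
        print o_alert_vo.alert_name
        print o_alert_vo.alert_category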

Dynamically binding Python methods to an instance correctly binds the method names, but not the method

I'm writing a client for a group of RESTful services. The body of the REST calls have the same XML structure, given parameters. There are several dozen calls, and I will not be implementing all of them. As such, I want to make them easy to specify and easy to use. The REST methods are grouped by functionality in separate modules and will need to share the same urllib2 opener for authentication and cookies. Here's an example of how a method is declared:
@rest_method('POST', '/document')
def createDocument(id, title, body):
    # possibly some validation on the arguments
    pass
All the developer has to care about is validation. The format of the XML (for POST and PUT) or the URL (for GET and DELETE) and the deserialization of the response is done in helper methods. The decorated methods are collected in a client object from which they will be executed and processed. For example:
c = RESTClient('http://foo.com', username, password)
c.createDocument(1, 'title', 'body')
The code is done. The only issue is in attaching the decorated methods to the client class. Although all the decorated methods can be seen in the client instance, they all share the same definition, namely the last one to be bound. Here's a brief example which duplicates the behaviour I'm seeing:
import types

class C(object): pass

def one(a): return a
def two(a, b): return a + b
def bracketit(t): return '(%s)' % t

c = C()
for m in (one, two):
    new_method = lambda self, *args, **kwargs: \
        bracketit(m(*args, **kwargs))
    method = types.MethodType(new_method, c, C)
    setattr(C, m.__name__, method)

print c.one
print c.two
print c.two(1, 2)
print c.one(1)
When I run this, I get the following output:
<bound method C.<lambda> of <__main__.C object at 0x1003b0d90>>
<bound method C.<lambda> of <__main__.C object at 0x1003b0d90>>
(3)
Traceback (most recent call last):
  File "/tmp/test.py", line 19, in <module>
    print c.one(1)
  File "/tmp/test.py", line 12, in <lambda>
    bracketit(m(*args, **kwargs))
TypeError: two() takes exactly 2 arguments (1 given)
I'm not sure why the two methods are bound in the same way. I haven't been able to find much documentation on how instancemethod binds methods to instances. What is going on underneath the hood, and how would I fix the above code so that the second call prints '(1)'?
The lambda looks m up when it is called, not when it is defined. After the end of the for loop, m is set to two, so calling either c.one or c.two will result in two being called.
You can tell that two is being called by looking at the last line of your traceback:
TypeError: two() takes exactly 2 arguments (1 given)
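A minimal illustration of that late-binding behavior (my example, not part of the original answer):
fns = []
for i in (1, 2):
    fns.append(lambda: i)                 # i is looked up when the lambda runs
print [f() for f in fns]                  # [2, 2] -- both see the final value of i

fns = []
for i in (1, 2):
    fns.append((lambda i: lambda: i)(i))  # inner lambda closes over the parameter
print [f() for f in fns]                  # [1, 2] -- each captured its own value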
This should do what you expect, but it's kinda messy:
class C(object): pass

def one(a): return a
def two(a, b): return a + b
def bracketit(t): return '(%s)' % t

c = C()
for m in (one, two):
    def build_method(m):
        return (lambda self, *args, **kwargs:
                bracketit(m(*args, **kwargs)))
    method = build_method(m)
    setattr(C, m.__name__, method)

print c.one
print c.two
print c.two(1, 2)
print c.one(1)
I also removed the explicit creation of the method object, as it is unnecessary: a plain function assigned to the class becomes a method automatically when looked up on an instance.
The problem is the variable m is left as two at the end of the loop, and that affects the definitions made during the loop. You can fix it by creating closures with nested functions:
for m in (one, two):
    def make_method(m):
        def new_method(self, *args, **kwargs):
            return bracketit(m(*args, **kwargs))
        return new_method
    method = types.MethodType(make_method(m), c, C)
    setattr(C, m.__name__, method)
When run in your test code, this produces:
<bound method C.new_method of <__main__.C object at 0x0135EF30>>
<bound method C.new_method of <__main__.C object at 0x0135EF30>>
(3)
(1)
