Why doesn't the weakref work on this bound method? - python

I have a project where I'm trying to use weakrefs with callbacks, and I don't understand what I'm doing wrong. I have created a simplified test that shows the exact behavior I'm confused by.
Why is it that in this test test_a works as expected, but the weakref for self.MyCallbackB disappears between the class initialization and the call to test_b? I thought that as long as the instance (a) exists, the reference to self.MyCallbackB should exist, but it doesn't.
import weakref

class A(object):
    def __init__(self):
        def MyCallbackA():
            print 'MyCallbackA'
        self.MyCallbackA = MyCallbackA

        self._testA = weakref.proxy(self.MyCallbackA)
        self._testB = weakref.proxy(self.MyCallbackB)

    def MyCallbackB(self):
        print 'MyCallbackB'

    def test_a(self):
        self._testA()

    def test_b(self):
        self._testB()

if __name__ == '__main__':
    a = A()
    a.test_a()
    a.test_b()

You want a WeakMethod.
An explanation of why your solution doesn't work can be found in the discussion of the recipe:
Normal weakref.refs to bound methods don't quite work the way one expects, because bound methods are first-class objects; weakrefs to bound methods are dead-on-arrival unless some other strong reference to the same bound method exists.
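For reference, later Pythons ship this as weakref.WeakMethod (added in Python 3.4), which keeps separate weak references to the instance and the function and re-creates the bound method on demand. A minimal sketch of the failure and the fix, in Python 3 syntax:

```python
import weakref

class A:
    def cb(self):
        return 'called'

a = A()
dead = weakref.ref(a.cb)          # the temporary bound method is collected at once
alive = weakref.WeakMethod(a.cb)  # stores weakrefs to `a` and `A.cb` separately

print(dead() is None)   # True: dead on arrival
print(alive()())        # 'called': the bound method is re-created on demand
del a
print(alive() is None)  # True: dies with the instance, as a callback should
```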

According to the documentation for the weakref module:
In the following, the term referent means the object which is referred to by a weak reference.
A weak reference to an object is not enough to keep the object alive: when the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else.
What's happening with MyCallbackA is that you are holding a reference to it on the instance of A, thanks to -
self.MyCallbackA = MyCallbackA
Now, there is no reference to the bound method MyCallbackB in your code. It is held only in a.__class__.__dict__ as an unbound method. Basically, a bound method is created (and returned to you) each time you do self.methodName. (AFAIK, a bound method works like a property, using a read-only descriptor, at least for new-style classes. Something similar, i.e. without descriptors, presumably happens for old-style classes; I'll leave it to someone more experienced to verify that claim.) So, self.MyCallbackB dies as soon as the weakref is created, because there is no strong reference to it!
My conclusions are based on :-
import weakref

# trace is called when the object is deleted - see the weakref docs.
def trace(x):
    print "Del MycallbackB"

class A(object):
    def __init__(self):
        def MyCallbackA():
            print 'MyCallbackA'
        self.MyCallbackA = MyCallbackA
        self._testA = weakref.proxy(self.MyCallbackA)

        print "Create MyCallbackB"
        # To fix it, do -
        # self.MyCallbackB = self.MyCallbackB
        # The name on the LHS could be anything, even foo!
        self._testB = weakref.proxy(self.MyCallbackB, trace)
        print "Done playing with MyCallbackB"

    def MyCallbackB(self):
        print 'MyCallbackB'

    def test_a(self):
        self._testA()

    def test_b(self):
        self._testB()

if __name__ == '__main__':
    a = A()
    #print a.__class__.__dict__["MyCallbackB"]
    a.test_a()
Output
Create MyCallbackB
Del MycallbackB
Done playing with MyCallbackB
MyCallbackA
Note:
I tried verifying this for old-style classes. It turned out that "print a.test_a.__get__" outputs
<method-wrapper '__get__' of instancemethod object at 0xb7d7ffcc>
for both new- and old-style classes. So it may not really be a descriptor, just something descriptor-like. In any case, the point is that a bound-method object is created when you access an instance method through self, and unless you maintain a strong reference to it, it will be deleted.
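The claim that a fresh bound-method object is created on every access is easy to check directly (Python 3 syntax):

```python
class A:
    def m(self):
        pass

a = A()
# Each attribute access runs the function's __get__ (the descriptor
# protocol), producing a brand-new bound-method object every time.
print(a.m is a.m)  # False: two distinct wrapper objects
print(a.m == a.m)  # True: same function bound to the same instance
```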

The other answers address the why in the original question, but either don't provide a workaround or refer to external sites.
After working through several other posts on StackExchange on this topic, many of which are marked as duplicates of this question, I finally came to a succinct workaround. When I know the nature of the object I'm dealing with, I use the weakref module; when I might instead be dealing with a bound method (as occurs in my code when using event callbacks), I now use the following WeakRef class as a direct replacement for weakref.ref(). I've tested this with Python 2.4 through 2.7 inclusive, but not on Python 3.x.
import weakref

class WeakRef:
    def __init__(self, item):
        try:
            self.method = weakref.ref(item.im_func)
            self.instance = weakref.ref(item.im_self)
        except AttributeError:
            self.reference = weakref.ref(item)
        else:
            self.reference = None

    def __call__(self):
        if self.reference is not None:
            return self.reference()
        instance = self.instance()
        if instance is None:
            return None
        method = self.method()
        return getattr(instance, method.__name__)
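Under Python 3 the im_func/im_self attributes are gone (they were renamed __func__/__self__), so a port of the same idea might look like the sketch below; WeakRefCompat is my name for it, not a stdlib class:

```python
import weakref

class WeakRefCompat(object):
    """Weakref usable for both plain objects and bound methods
    (a Python 3 sketch of the same idea)."""
    def __init__(self, item):
        try:
            # Bound methods expose their pieces as __func__ / __self__
            # in Python 3 (im_func / im_self in Python 2).
            self.method = weakref.ref(item.__func__)
            self.instance = weakref.ref(item.__self__)
            self.reference = None
        except AttributeError:
            self.reference = weakref.ref(item)

    def __call__(self):
        if self.reference is not None:
            return self.reference()
        instance = self.instance()
        if instance is None:
            return None
        method = self.method()
        if method is None:
            return None
        # Re-create the bound method on demand.
        return getattr(instance, method.__name__)

class A(object):
    def cb(self):
        return 'hi'

a = A()
r = WeakRefCompat(a.cb)
print(r()())        # 'hi' while the instance is alive
del a
print(r() is None)  # True once the instance is gone
```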

Related

The behaviour of 'super()' when type is 'object'?

The documentation for super() found on the Python website says it returns a proxy object that delegates method calls to a parent or sibling class. The information found in super considered super, how does super() work with multiple inheritance, and super considered harmful explains that in fact the next method in the MRO is used. My question is: what happens if super(object, self).some_method() is used? Since object typically appears at the end of an MRO list, I would guess the search hits the end immediately with an exception. But in fact, it seems that the methods of the proxy itself are called, as shown by super(object, self).__repr__() printing the super object itself. I wonder whether the behavior of super() with object is not to delegate methods at all.
If this is the case, I wonder whether any reliable material ever mentions it, and whether it applies to other Python implementations.
class X(object):
    def __init__(self):
        # This shows [X, object].
        print X.mro()
        # This shows a bunch of attributes that a super object can have.
        print dir(super(object, self))
        # This shows something similar to <super object at xxx>
        print super(object, self)
        # This fails with `super() takes at least one argument`
        try:
            super(object, self).__init__()
        except:
            pass
        # This shows something like <super: <class 'object'>, <X object>>.
        print super(object, self).__repr__()
        # This shows the default repr() of the instance, like <X object at xxx>
        print super(X, self).__repr__()

if __name__ == '__main__':
    X()
If super doesn't find something to delegate to while looking through the method resolution order (MRO), or if you're looking for the attribute __class__, it will check its own attributes.
Because object is always the last type in the MRO (at least to my knowledge it's always the last one), you have effectively disabled the delegation, and only the super instance itself is checked.
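A minimal demonstration of that fallback, using nothing beyond the builtins:

```python
class X(object):
    pass

x = X()
# object is the last entry in X's MRO, so delegation is exhausted at once
# and attribute lookup falls back to the super object's own attributes.
print(super(object, x).__repr__())  # <super: <class 'object'>, <X object>>
# With X as the first argument, delegation proceeds normally to object:
print(super(X, x).__repr__())       # <__main__.X object at 0x...>
```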
I found the question really interesting so I went to the source code of super and in particular the delegation part (super.__getattribute__ (in CPython 3.6.5)) and I translated it (roughly) to pure Python accompanied by some additional comments of my own:
class MySuper(object):
    def __init__(self, klass, instance):
        self.__thisclass__ = klass
        self.__self__ = instance
        self.__self_class__ = type(instance)

    def __repr__(self):
        # That's not in the original implementation, it's here for fun
        return 'hoho'

    def __getattribute__(self, name):
        su_type = object.__getattribute__(self, '__thisclass__')
        su_obj = object.__getattribute__(self, '__self__')
        su_obj_type = object.__getattribute__(self, '__self_class__')
        starttype = su_obj_type
        # If asked for the __class__ don't go looking for it in the MRO!
        if name == '__class__':
            return object.__getattribute__(self, '__class__')
        mro = starttype.mro()
        n = len(mro)
        # Jump ahead in the MRO to the passed-in class,
        # excluding the last one because that is skipped anyway.
        for i in range(0, n - 1):
            if mro[i] is su_type:
                break
        # The C code only increments by one here, but the C for loop
        # actually increases the i variable by one before the condition
        # is checked, so to get the equivalent code one needs to increment
        # twice here.
        i += 2
        # We're at the end of the MRO. Check if super has this attribute.
        if i >= n:
            return object.__getattribute__(self, name)
        # Go up the MRO
        while True:
            tmp = mro[i]
            dict_ = tmp.__dict__
            try:
                res = dict_[name]
            except KeyError:
                pass
            else:
                # We found a match, now go through the descriptor protocol
                # so that we get a bound method (or whatever is applicable)
                # for this attribute.
                f = getattr(type(res), '__get__', None)
                if f is not None:
                    res = f(res, None if su_obj is starttype else su_obj, starttype)
                return res
            i += 1
            # Not really the nicest construct but it's a do-while loop
            # in the C code and I feel like this is the closest Python
            # representation of that.
            if i < n:
                continue
            else:
                break
        return object.__getattribute__(self, name)
As you can see there are some ways you could end up looking up the attribute on super:
If you're looking for the __class__ attribute
If you reached the end of the MRO immediately (by passing in object as first argument)!
If __getattribute__ couldn't find a match in the remaining MRO.
Actually because it works like super you can use that instead (at least as far as the attribute delegation is concerned):
class X(object):
    def __init__(self):
        print(MySuper(object, self).__repr__())

X()
That will print the hoho from the MySuper.__repr__. Feel free to experiment with that code by inserting some prints to follow the control flow.
I wonder whether any reliable material ever mentions it, and whether it applies to other Python implementations.
What I said above was based on my observations of the CPython 3.6 source, but I think it shouldn't be too different for other Python versions given that the other Python implementations (often) follow CPython.
In fact I also checked:
CPython 2
PyPy (Python 2),
IronPython (Python 2)
And all of them return the __repr__ of super.
Note that Python follows the "We are all consenting adults" style, so I would be surprised if someone bothered to formalize such unusual usages. I mean who would try to delegate to a method of the sibling or parent class of object (the "ultimate" parent class).
super defines a few of its own attributes and needs a way to provide access to them. First, it uses the __dunder__ style, which Python reserves for itself: no library or application should define names that start and end with a double underscore. This means the super object can be confident that nothing will clash with its attributes __self__, __self_class__ and __thisclass__. So if it searches the MRO and doesn't find the requested attribute, it falls back on trying to find the attribute on the super object itself. For instance:
>>> class A:
...     pass
>>> class B(A):
...     pass
>>> s = super(A, B())
>>> s.__self__
<__main__.B object at 0x03BE4E70>
>>> s.__self_class__
<class '__main__.B'>
>>> s.__thisclass__
<class '__main__.A'>
Since you have specified object as the type to start looking beyond and because object is always the last type in the mro, then there is no possible candidate for which to fetch the method or attribute. In this situation, super behaves as if it had tried various types looking for the name, but didn't find one. So it tries to fetch the attribute from itself. However, since the super object is also an object it has access to __init__, __repr__ and everything else object defines. And so super returns its own __init__ and __repr__ methods for you.
This is kind of a situation of "ask a silly question (of super) and get a silly answer". That is, super should only ever be called with, as its first argument, the class in which the calling function was defined. When you call it with object as the first argument, you get undefined behaviour.

Python: deletion of self referencing object

I want to ask how to delete an object with a self-reference in Python.
Consider a class that is a simple example, designed to show when an instance is created and when it is deleted:
#!/usr/bin/python
class TTest:
    def __init__(self):
        self.sub_func = None
        print 'Created', self
    def __del__(self):
        self.sub_func = None
        print 'Deleted', self
    def Print(self):
        print 'Print', self
This class has a variable self.sub_func to which we assume a function will be assigned. I want to assign to self.sub_func a function that uses an instance of TTest. See the following case:
def SubFunc1(t):
    t.Print()

def DefineObj1():
    t = TTest()
    t.sub_func = lambda: SubFunc1(t)
    return t

t = DefineObj1()
t.sub_func()
del t
The result is:
Created <__main__.TTest instance at 0x7ffbabceee60>
Print <__main__.TTest instance at 0x7ffbabceee60>
that is to say, even though we executed "del t", t was not deleted.
I guess the reason is that t.sub_func is a self-referencing object, so the reference count of t does not become zero at "del t", and thus t is not deleted by the garbage collector.
To solve this problem, I need to insert
t.sub_func= None
before "del t"; in this time, the output is:
Created <__main__.TTest instance at 0x7fab9ece2e60>
Print <__main__.TTest instance at 0x7fab9ece2e60>
Deleted <__main__.TTest instance at 0x7fab9ece2e60>
But this is strange. t.sub_func is part of t, so I do not want to care about clearing t.sub_func when deleting t.
Could you tell me if you know a good solution?
How do you make sure an object in a reference cycle gets deleted when it is no longer reachable? The simplest solution is not to define a __del__ method. Very few, if any, classes need a __del__ method. Python makes no guarantees about when, or even if, a __del__ method will get called.
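To illustrate that first sentence: with no __del__ defined, the cycle collector reclaims the self-referencing object just fine. A Python 3 sketch of the question's setup:

```python
import gc
import weakref

class TTest(object):    # same shape as the question's class, but no __del__
    def Print(self):
        print('Print', self)

def define_obj():
    t = TTest()
    t.sub_func = lambda: t.Print()  # closure over t: a t -> lambda -> t cycle
    return t

t = define_obj()
t.sub_func()            # prints: Print <...TTest object at ...>
w = weakref.ref(t)
del t
gc.collect()
print(w() is None)      # True: the collector found and reclaimed the cycle
```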
There are several ways you can alleviate this problem.
Use a function rather than a lambda that contains and checks a weak reference. This requires explicitly checking that the object is still alive each time the function is called.
Create a unique class for each object so that we can store the function on the class rather than as a monkey-patched function. This could get memory heavy.
Define a property that knows how to get the given function and turn it into a method. My personal favourite, as it closely approximates how bound methods are created from a class's unbound methods.
Using weak references
import weakref

class TTest:
    def __init__(self):
        self.func = None
        print 'Created', self
    def __del__(self):
        print 'Deleted', self
    def print_self(self):
        print 'Print', self

def print_func(t):
    t.print_self()

def create_ttest():
    t = TTest()
    weak_t = weakref.ref(t)
    def func():
        t1 = weak_t()
        if t1 is None:
            raise TypeError("TTest object no longer exists")
        print_func(t1)
    t.func = func
    return t

if __name__ == "__main__":
    t = create_ttest()
    t.func()
    del t
Creating a unique class
class TTest:
    def __init__(self):
        print 'Created', self
    def __del__(self):
        print 'Deleted', self
    def print_self(self):
        print 'Print', self

def print_func(t):
    t.print_self()

def create_ttest():
    class SubTTest(TTest):
        def func(self):
            print_func(self)
    SubTTest.func1 = print_func
    # The above also works. The first argument is instantiated as the object
    # the function was called on.
    return SubTTest()

if __name__ == "__main__":
    t = create_ttest()
    t.func()
    t.func1()
    del t
Using properties
import types

class TTest(object):  # must be new-style for the property to work
    def __init__(self, func):
        self._func = func
        print 'Created', self
    def __del__(self):
        print 'Deleted', self
    def print_self(self):
        print 'Print', self
    @property
    def func(self):
        return types.MethodType(self._func, self)

def print_func(t):
    t.print_self()

def create_ttest():
    def func(self):
        print_func(self)
    t = TTest(func)
    return t

if __name__ == "__main__":
    t = create_ttest()
    t.func()
    del t
From the official CPython docs:
Objects that have __del__() methods and are part of a reference cycle cause the entire reference cycle to be uncollectable, including objects not necessarily in the cycle but reachable only from it. Python doesn’t collect such cycles automatically because, in general, it isn’t possible for Python to guess a safe order in which to run the __del__() methods. If you know a safe order, you can force the issue by examining the garbage list, and explicitly breaking cycles due to your objects within the list. Note that these objects are kept alive even so by virtue of being in the garbage list, so they should be removed from garbage too. For example, after breaking cycles, do del gc.garbage[:] to empty the list. It’s generally better to avoid the issue by not creating cycles containing objects with __del__() methods, and garbage can be examined in that case to verify that no such cycles are being created.
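Note that this paragraph describes Python 2 behaviour: since Python 3.4 (PEP 442), cycles containing objects with __del__ methods are collectable and no longer end up in gc.garbage. A quick check of that, runnable under Python 3:

```python
import gc
import weakref

class WithDel:
    def __del__(self):
        pass

obj = WithDel()
obj.cycle = obj        # reference cycle through the instance
w = weakref.ref(obj)
del obj
gc.collect()
# Python 2 would have parked this cycle in gc.garbage because of __del__;
# Python 3.4+ finalizes and frees it instead.
print(w() is None)     # True on Python 3.4+
print(gc.garbage)      # [] on Python 3.4+
```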
See also: http://engineering.hearsaysocial.com/2013/06/16/circular-references-in-python/

Garbage collect a class with a reference to its instance?

Consider this code snippet:
import gc
from weakref import ref

def leak_class(create_ref):
    class Foo(object):
        # make cycle non-garbage-collectable
        def __del__(self):
            pass
    if create_ref:
        # create a strong reference cycle
        Foo.bar = Foo()
    return ref(Foo)

# without reference cycle
r = leak_class(False)
gc.collect()
print r()  # prints None

# with reference cycle
r = leak_class(True)
gc.collect()
print r()  # prints <class '__main__.Foo'>
It creates a reference cycle that cannot be collected, because the referenced instance has a __del__ method. The cycle is created here:
# create a strong reference cycle
Foo.bar = Foo()
This is just a proof of concept; the reference could be added by some external code, a descriptor, or anything. If that's not clear to you, remember that each object maintains a reference to its class:
+-------------+ +--------------------+
| | Foo.bar | |
| Foo (class) +------------>| foo (Foo instance) |
| | | |
+-------------+ +----------+---------+
^ |
| foo.__class__ |
+--------------------------------+
If I could guarantee that Foo.bar is only accessed from Foo, the cycle wouldn't be necessary, as theoretically the instance could hold only a weak reference to its class.
Can you think of a practical way to make this work without a leak?
As some are asking why external code would modify a class yet not control its lifecycle, consider this example, similar to the real-life case I was working on:
class Descriptor(object):
    def __get__(self, obj, kls=None):
        if obj is None:
            try:
                obj = kls._my_instance
            except AttributeError:
                obj = kls()
                kls._my_instance = obj
        return obj.something()

# usage example #
class Example(object):
    foo = Descriptor()
    def something(self):
        return 100

print Example.foo
In this code only Descriptor (a non-data descriptor) is part of the API I'm implementing. Example class is an example of how the descriptor would be used.
Why does the descriptor store a reference to an instance inside the class itself? Basically for caching purposes. Descriptor required this contract with the implementor: it would be used in any class assuming that
The class has a constructor with no args, that gives an "anonymous instance" (my definition)
The class has some behavior-specific methods (something here).
An instance of the class can stay alive for an undefined amount of time.
It doesn't assume anything about:
How long it takes to construct an object
Whether the class implements __del__ or other magic methods
How long the class is expected to live
Moreover the API was designed to avoid any extra load on the class implementor. I could have moved the responsibility for caching the object to the implementor, but I wanted a standard behavior.
There actually is a simple solution to this problem: make the default behavior to cache the instance (like it does in this code) but allow the implementor to override it if they have to implement __del__.
Of course this wouldn't be as simple if we assumed that the class state had to be preserved between calls.
As a starting point, I was coding a "weak object", an implementation of object that only kept a weak reference to its class:
from weakref import proxy

def make_proxy(strong_kls):
    kls = proxy(strong_kls)
    class WeakObject(object):
        def __getattribute__(self, name):
            try:
                attr = kls.__dict__[name]
            except KeyError:
                raise AttributeError(name)
            try:
                return attr.__get__(self, kls)
            except AttributeError:
                return attr
        def __setattr__(self, name, value):
            # TODO: implement...
            pass
    return WeakObject

Foo.bar = make_proxy(Foo)()
It appears to work for a limited number of use cases, but I'd have to reimplement the whole set of object methods, and I don't know how to deal with classes that override __new__.
For your example, why don't you store _my_instance in a dict on the descriptor class, rather than on the class holding the descriptor? You could use a weakref or WeakValueDictionary in that dict, so that when the object disappears the dict will just lose its reference and the descriptor will create a new one on the next access.
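A sketch of that suggestion, keeping the cache in a WeakValueDictionary on the descriptor, keyed by the owning class (the names match the question's example; the exact caching policy here is my assumption):

```python
import weakref

class Descriptor(object):
    # Cache anonymous instances here instead of writing _my_instance onto
    # the owning class, so no class -> instance -> class strong cycle exists.
    _instances = weakref.WeakValueDictionary()

    def __get__(self, obj, kls=None):
        if obj is None:
            try:
                obj = Descriptor._instances[kls]
            except KeyError:
                obj = kls()
                Descriptor._instances[kls] = obj
        return obj.something()

class Example(object):
    foo = Descriptor()
    def something(self):
        return 100

print(Example.foo)  # 100; the cached instance is recreated on demand once
                    # nothing else holds a strong reference to it
```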
Edit: I think you have a misunderstanding about the possibility of collecting the class while the instance lives on. Methods in Python are stored on the class, not the instance (barring peculiar tricks). If you have an object obj of class Class, and you allowed Class to be garbage collected while obj still exists, then calling a method obj.meth() on the object would fail, because the method would have disappeared along with the class. That is why your only option is to weaken your class->obj reference; even if you could make objects weakly reference their class, all it would do is break the class if the weakness ever "took effect" (i.e., if the class were collected while an instance still existed).
The problem you're facing is just a special case of the general ref-cycle-with-__del__ problem.
I don't see anything unusual in the way the cycles are created in your case, which is to say, you should resort to the standard ways of avoiding the general problem.
I think implementing and using a weak object would be hard to get right, and you would still need to remember to use it in all places where you define __del__. It doesn't sound like the best approach.
Instead, you should try the following:
consider not defining __del__ in your class (recommended)
in classes which define __del__, avoid reference cycles (in general, it might be hard/impossible to make sure no cycles are created anywhere in your code. In your case, seems like you want the cycles to exist)
explicitly break the cycles, using del (if there are appropriate points to do that in your code)
scan the gc.garbage list, and explicitly break reference cycles (using del)

I don't understand this python __del__ behaviour

Can someone explain why the following code behaves the way it does:
import types

class Dummy():
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print "delete", self.name

d1 = Dummy("d1")
del d1
d1 = None
print "after d1"

d2 = Dummy("d2")
def func(self):
    print "func called"
d2.func = types.MethodType(func, d2)
d2.func()
del d2
d2 = None
print "after d2"

d3 = Dummy("d3")
def func(self):
    print "func called"
d3.func = types.MethodType(func, d3)
d3.func()
d3.func = None
del d3
d3 = None
print "after d3"
The output (note that the destructor for d2 is never called) is this (Python 2.7):
delete d1
after d1
func called
after d2
func called
delete d3
after d3
Is there a way to "fix" the code so the destructor is called without deleting the method added? I mean, the best place to put the d2.func = None would be in the destructor!
Thanks
[edit] Based on the first few answers, I'd like to clarify that I'm not asking about the merits (or lack thereof) of using __del__. I tried to create the shortest code that would demonstrate what I consider non-intuitive behavior. I'm assuming a circular reference has been created, but I'm not sure why. If possible, I'd like to know how to avoid the circular reference.
You cannot assume that __del__ will ever be called - it is not a place to hope that resources are automagically deallocated. If you want to make sure that a (non-memory) resource is released, you should make a release() or similar method and then call that explicitly (or use it in a context manager as pointed out by Thanatos in comments below).
At the very least you should read the __del__ documentation very closely, and then you should probably not try to use __del__. (Also refer to the gc.garbage documentation for other bad things about __del__)
I'm providing my own answer because, while I appreciate the advice to avoid __del__, my question was how to get it to work properly for the code sample provided.
Short version: The following code uses weakref to avoid the circular reference. I thought I'd tried this before posting the question, but I guess I must have done something wrong.
import types, weakref

class Dummy():
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print "delete", self.name

d2 = Dummy("d2")
def func(self):
    print "func called"
d2.func = types.MethodType(func, weakref.ref(d2))  # This works
#d2.func = func.__get__(weakref.ref(d2), Dummy)    # This works too
d2.func()
del d2
d2 = None
print "after d2"
Longer version:
When I posted the question, I did search for similar questions. I know you can use with instead, and that the prevailing sentiment is that __del__ is BAD.
Using with makes sense, but only in certain situations. Opening a file, reading it, and closing it is a good example where with is a perfectly good solution. You've got a specific block of code where the object is needed, and you want to clean up the object at the end of the block.
A database connection seems to be used often as an example that doesn't work well using with, since you usually need to leave the section of code that creates the connection and have the connection closed in a more event-driven (rather than sequential) timeframe.
If with is not the right solution, I see two alternatives:
You make sure __del__ works (see this blog for a better description of weakref usage).
You use the atexit module to run a callback when your program closes. See this topic for example.
While I tried to provide simplified code, my real problem is more event-driven, so with is not an appropriate solution (with is fine for the simplified code). I also wanted to avoid atexit, as my program can be long-running, and I want to be able to perform the cleanup as soon as possible.
So, in this specific case, I find it to be the best solution to use weakref and prevent circular references that would prevent __del__ from working.
This may be an exception to the rule, but there are use-cases where using weakref and __del__ is the right implementation, IMHO.
Instead of del, you can use the with statement.
http://effbot.org/zone/python-with-statement.htm
Just like with file objects, you could do something like

with Dummy('d1') as d:
    # stuff
# d's __exit__ method is guaranteed to have been called
del doesn't call __del__
del, the way you are using it, removes a local variable. __del__ is called when the object is destroyed. Python as a language makes no guarantees as to when it will destroy an object.
CPython, the most common implementation of Python, uses reference counting. As a result, del will often work as you expect. However, it will not work in the case that you have a reference cycle:
d2 -> d2.func -> d2
Python doesn't detect this and so won't clean it up right away. And it's not just reference cycles. If an exception is thrown, you probably still want your destructor to be called. However, Python will typically hold onto the local variables as part of its traceback.
The solution is not to depend on the __del__ method. Rather, use a context manager.
class Dummy:
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        print "Destroying", self

with Dummy() as dummy:
    # Do whatever you want with dummy in here
    pass
# __exit__ will be called before you get here
This is guaranteed to work, and you can even check the parameters to see whether you are handling an exception and do something different in that case.
A full example of a context manager.
class Dummy(object):
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, exct_type, exce_value, traceback):
        print 'cleanup:', self
    def __repr__(self):
        return 'Dummy(%r)' % (self.name,)

with Dummy("foo") as d:
    print 'using:', d
print 'later:', d
It seems to me the real heart of the matter is here:
adding the functions is dynamic (at runtime) and not known in advance
I sense that what you are really after is a flexible way to bind different functionality to an object representing program state, also known as polymorphism. Python does that quite well, not by attaching/detaching methods, but by instantiating different classes. I suggest you look again at your class organization. Perhaps you need to separate a core, persistent data object from transient state objects. Use the has-a paradigm rather than is-a: each time state changes, you either wrap the core data in a state object, or you assign the new state object to an attribute of the core.
If you're sure you can't use that kind of pythonic OOP, you could still work around your problem another way by defining all your functions in the class to begin with and subsequently binding them to additional instance attributes (unless you're compiling these functions on the fly from user input):
class LongRunning(object):
    def bark_loudly(self):
        print("WOOF WOOF")
    def bark_softly(self):
        print("woof woof")

while True:
    d = LongRunning()
    d.bark = d.bark_loudly
    d.bark()
    d.bark = d.bark_softly
    d.bark()
An alternative solution to using weakref is to bind the function to the instance dynamically only when it is called, by overriding __getattr__ or __getattribute__ on the class to return func.__get__(self, type(self)) instead of just func for functions attached to the instance. This is how functions defined on the class behave. Unfortunately (for some use cases) Python doesn't perform the same logic for functions attached to the instance itself, but you can modify it to do so. I've had similar problems with descriptors bound to instances. Performance here probably isn't as good as using weakref, but it is an option that works transparently for any dynamically assigned function, using only Python builtins.
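That dynamic-binding idea can be sketched by overriding __getattribute__ (AutoBind is my name for the sketch, not an established class): plain functions stored on the instance are run through the descriptor protocol at lookup time, so no bound method, and hence no cycle, is ever stored.

```python
import types

class AutoBind(object):
    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        inst_dict = object.__getattribute__(self, '__dict__')
        # Bind plain functions found in the instance dict the same way
        # class attributes are bound, but only at lookup time.
        if name in inst_dict and isinstance(attr, types.FunctionType):
            return attr.__get__(self, type(self))
        return attr

obj = AutoBind()
def greet(self):
    return 'hi from %s' % type(self).__name__

obj.greet = greet   # stored as a plain function: no reference cycle
print(obj.greet())  # 'hi from AutoBind'
```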
If you find yourself doing this often, you might want a custom metaclass that does dynamic binding of instance-level functions.
Another alternative is to add the function directly to the class, which will then properly perform the binding when it's called. For a lot of use cases this has some headaches involved: namely, properly namespacing the functions so they don't collide. The instance id could be used for this, though since an id in CPython isn't guaranteed to be unique over the life of the program, you'd need to ponder this a bit to make sure it works for your use case; in particular, you probably need to delete the class-level function when an object goes out of scope and its id/memory address becomes available again. __del__ is perfect for this :). Alternatively, you could clear out all methods namespaced to the instance on object creation (in __init__ or __new__).
Another alternative (rather than messing with Python magic methods) is to explicitly add a method for calling your dynamically bound functions. This has the downside that your users can't call your function using normal Python syntax:
class MyClass(object):
    def dynamic_func(self, func_name):
        return getattr(self, func_name).__get__(self, type(self))
    def call_dynamic_func(self, func_name, *args, **kwargs):
        return getattr(self, func_name).__get__(self, type(self))(*args, **kwargs)

    """
    Alternate without using descriptor functionality:
    def call_dynamic_func(self, func_name, *args, **kwargs):
        return getattr(self, func_name)(self, *args, **kwargs)
    """
Just to make this post complete, I'll show your weakref option as well:
import weakref

inst = MyClass()
def func(self):
    print 'My func'
# You could also use the types module, but the descriptor method is cleaner IMO
inst.func = func.__get__(weakref.ref(inst), type(inst))

why are my weakrefs dead in the water when they point to a method? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Why doesn't the weakref work on this bound method?
I'm using weakrefs in an observer pattern and noticed an interesting phenomenon. If I create an object and add one of its methods as an observer of an Observable, the reference is dead almost instantly. Can anyone explain what is happening?
I'm also interested in thoughts on why this might be a bad idea. I've decided not to use the weakrefs and just make sure to clean up after myself properly with Observable.removeobserver, but my curiosity is killing me here.
Here's the code:
from weakref import ref

class Observable:
    __observers = None
    def addobserver(self, observer):
        if not self.__observers:
            self.__observers = []
        self.__observers.append(ref(observer))
        print 'ADDING observer', ref(observer)
    def removeobserver(self, observer):
        self.__observers.remove(ref(observer))
    def notify(self, event):
        for o in self.__observers:
            if o() is None:
                print 'observer was deleted (removing)', o
                self.__observers.remove(o)
            else:
                o()(event)

class C(Observable):
    def set(self, val):
        self.notify(val)

class bar(object):
    def __init__(self):
        self.c = C()
        self.c.addobserver(self.foo)
        print self.c._Observable__observers
    def foo(self, x):
        print 'callback', x  # never reached

b = bar()
b.c.set(3)
and here's the output:
ADDING observer <weakref at 0xaf1570; to 'instancemethod' at 0xa106c0 (foo)>
[<weakref at 0xaf1570; dead>]
observer was deleted (removing) <weakref at 0xaf1570; dead>
The main thing to note is that the print statement after the call to addobserver shows that the weakref is already dead.
Whenever you reference an object's method, a bit of magic happens, and it's that magic that's getting in your way.
Specifically, Python looks up the method on the object's class, then combines it with the object itself to create a kind of callable called a bound method. Every time the expression self.foo is evaluated, a new bound-method instance is created. If you immediately take a weakref to that, then there are no other references to the bound method (even though both the object and the class's method still have live refs), and the weakref dies.
See this snippet on ActiveState for a workaround.
Each time you access a method of an instance, obj.m, a wrapper (called a "bound method") is generated; it is callable and adds self (obj) as the first argument when called. This is a neat solution for passing self "implicitly", and it is what allows passing instance methods around in the first place. But it also means that each time you type obj.m, a new (very lightweight) object is created, and unless you keep a (non-weak) reference to it around, it will be GC'd, because nobody will keep it alive for you.
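So the minimal fix for this observer pattern is to keep one strong reference to the bound method for as long as the subscription should live (Python 3 syntax; the Observer class here is illustrative):

```python
from weakref import ref

class Observer(object):
    def on_event(self, event):
        return event

o = Observer()
dead = ref(o.on_event)      # nothing else holds the temporary bound method
keep = o.on_event           # a strong reference, e.g. stored on the subscriber
alive = ref(keep)

print(dead() is None)       # True: collected immediately
print(alive() is not None)  # True: survives as long as `keep` does
```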
