import os

class Package:
    def __init__(self):
        self.files = []
        # ...

    def __del__(self):
        for file in self.files:
            os.unlink(file)
__del__(self) above fails with an AttributeError exception. I understand Python doesn't guarantee the existence of "global variables" (member data, in this context?) when __del__() is invoked. If that is the case, and this is the reason for the exception, how do I make sure the object is destructed properly?
I'd recommend using Python's with statement for managing resources that need to be cleaned up. The problem with an explicit close() method is that you have to worry about people forgetting to call it at all, or forgetting to place it in a finally block to prevent a resource leak when an exception occurs.
To use the with statement, create a class with the following methods:
def __enter__(self)
def __exit__(self, exc_type, exc_value, traceback)
In your example above, you'd use:

class Package:
    def __init__(self):
        self.files = []

    def __enter__(self):
        return self

    # ...

    def __exit__(self, exc_type, exc_value, traceback):
        for file in self.files:
            os.unlink(file)
Then, when someone wanted to use your class, they'd do the following:
with Package() as package_obj:
    # use package_obj
The variable package_obj will be an instance of type Package (it's the value returned by the __enter__ method). Its __exit__ method will automatically be called, regardless of whether or not an exception occurs.
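For example (a minimal sketch, not part of the original answer), the cleanup still runs when the body raises:

with Package() as package_obj:
    package_obj.files.append('/tmp/scratch.txt')  # assume this file exists
    raise RuntimeError('something went wrong')
# __exit__ has already unlinked the file by the time this exception propagates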
You could even take this approach a step further. In the example above, someone could still instantiate Package using its constructor without using the with clause. You don't want that to happen. You can fix this by creating a PackageResource class that defines the __enter__ and __exit__ methods. Then, the Package class would be defined strictly inside the __enter__ method and returned. That way, the caller could never instantiate the Package class without using a with statement:
class PackageResource:
    def __enter__(self):
        class Package:
            ...
        self.package_obj = Package()
        return self.package_obj

    def __exit__(self, exc_type, exc_value, traceback):
        self.package_obj.cleanup()
You'd use this as follows:
with PackageResource() as package_obj:
    # use package_obj
The standard way is to use atexit.register:
# package.py
import atexit
import os

class Package:
    def __init__(self):
        self.files = []
        atexit.register(self.cleanup)

    def cleanup(self):
        print("Running cleanup...")
        for file in self.files:
            print("Unlinking file: {}".format(file))
            # os.unlink(file)
But you should keep in mind that this will keep every created instance of Package alive until Python is terminated, because atexit holds a reference to each instance's bound cleanup method.
Demo using the code above saved as package.py:
$ python
>>> from package import *
>>> p = Package()
>>> q = Package()
>>> q.files = ['a', 'b', 'c']
>>> quit()
Running cleanup...
Unlinking file: a
Unlinking file: b
Unlinking file: c
Running cleanup...
A better alternative is to use weakref.finalize. See the examples at Finalizer Objects and Comparing finalizers with __del__() methods.
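For reference, a minimal sketch of the finalize-based version (following the pattern from the docs, not code from this answer):

import os
import weakref

class Package:
    def __init__(self):
        self.files = []
        # The callback must not reference self, or the object could never be collected.
        self._finalizer = weakref.finalize(self, self._cleanup, self.files)

    @staticmethod
    def _cleanup(files):
        for file in files:
            os.unlink(file)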
As an appendix to Clint's answer, you can simplify PackageResource using contextlib.contextmanager:
import contextlib

@contextlib.contextmanager
def packageResource():
    class Package:
        ...
    package = Package()
    yield package
    package.cleanup()
Alternatively, though probably not as Pythonic, you can override Package.__new__:
class Package(object):
    def __new__(cls, *args, **kwargs):
        @contextlib.contextmanager
        def packageResource():
            # adapt arguments if superclass takes some!
            package = super(Package, cls).__new__(cls)
            package.__init__(*args, **kwargs)
            yield package
            package.cleanup()
        return packageResource()

    def __init__(self, *args, **kwargs):
        ...
and simply use with Package(...) as package.
To make this shorter, name your cleanup function close and use contextlib.closing, in which case you can either use the unmodified Package class via with contextlib.closing(Package(...)) or override its __new__ to the simpler:
class Package(object):
    def __new__(cls, *args, **kwargs):
        package = super(Package, cls).__new__(cls)
        package.__init__(*args, **kwargs)
        return contextlib.closing(package)
And this constructor is inherited, so you can simply inherit, e.g.
class SubPackage(Package):
    def close(self):
        pass
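A quick usage sketch, assuming the closing-based __new__ above: contextlib.closing's __enter__ returns the wrapped object, so the with target is the package itself:

with SubPackage() as package:
    ...  # work with the SubPackage instance
# package.close() has been called at this point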
Here is a minimal working skeleton:
class SkeletonFixture:
    def __init__(self):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        pass

    def method(self):
        pass

with SkeletonFixture() as fixture:
    fixture.method()
Important: return self
If you're like me, and overlook the return self part (of Clint Miller's correct answer), you will be staring at this nonsense:
Traceback (most recent call last):
File "tests/simplestpossible.py", line 17, in <module>
fixture.method()
AttributeError: 'NoneType' object has no attribute 'method'
Hope it helps the next person.
I don't think it's possible for instance members to be removed before __del__ is called. My guess would be that the reason for your particular AttributeError is somewhere else (maybe you mistakenly remove self.files elsewhere).
However, as the others pointed out, you should avoid using __del__. The main reason is that instances with __del__ will not be garbage collected (they will only be freed when their refcount reaches 0). Therefore, if your instances are involved in circular references, they will live in memory for as long as the application runs. (This describes Python 2 behavior; since Python 3.4 and PEP 442, objects with __del__ that are part of a cycle can be collected. I may be mistaken about the details though, I'd have to read the gc docs again.)
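To illustrate the cycle problem, here is a hedged sketch (behavior differs across Python versions):

import gc

class Node:
    def __del__(self):
        print("finalized")

a = Node()
b = Node()
a.other = b
b.other = a  # reference cycle: refcounts never reach 0 on their own
del a, b
gc.collect()
# Before Python 3.4 (PEP 442), cycles containing __del__ objects were not
# collected and ended up in gc.garbage; on modern Python the collector
# handles them and "finalized" is printed twice.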
I think the problem could be in __init__, if there is more code than shown.
__del__ will be called even when __init__ has not been executed properly or threw an exception.
Source
Just wrap your destructor with a try/except statement and it will not throw an exception if your globals are already disposed of.
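A minimal sketch of that suggestion (swallowing the error is the point here, for better or worse):

def __del__(self):
    try:
        for file in self.files:
            os.unlink(file)
    except Exception:
        pass  # members or module globals may already be gone at interpreter shutdown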
Edit
Try this:
from weakref import proxy

class MyList(list): pass

class Package:
    def __init__(self):
        self.__del__.im_func.files = MyList([1, 2, 3, 4])
        self.files = proxy(self.__del__.im_func.files)

    def __del__(self):
        print self.__del__.im_func.files
This stuffs the file list into the __del__ function itself, which is guaranteed to exist at the time of the call. The weakref proxy is there to prevent Python, or you yourself, from somehow deleting the self.files variable (if it is deleted, it will not affect the original file list). If it is not being deleted even though there are more references to the variable, you can remove the proxy encapsulation.
It seems that the idiomatic way to do this is to provide a close() method (or similar), and call it explicitly.
A good idea is to combine both approaches.
Implement a context manager for explicit life-cycle handling, and also handle the cleanup in case the user forgets it or it is not convenient to use a with statement. This is best done with weakref.finalize.
This is how many libraries actually do it. And depending on the severity, you could issue a warning.
The finalizer is guaranteed to be called exactly once, so it is safe to call it at any time beforehand.
import os
from typing import List
import weakref

class Package:
    def __init__(self):
        self.files = []
        self._finalizer = weakref.finalize(self, self._cleanup_files, self.files)

    @staticmethod
    def _cleanup_files(files: List):
        for file in files:
            os.unlink(file)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._finalizer()
weakref.finalize returns a callable finalizer object which will be called when obj is garbage collected. "Unlike an ordinary weak reference, a finalizer will always survive until the reference object is collected, greatly simplifying lifecycle management."
Unlike atexit.register, the object is not held in memory until the interpreter is shut down.
And unlike object.__del__, weakref.finalize is guaranteed to be called at interpreter shutdown, so it is much safer.
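A short usage sketch for the class above: deterministic cleanup when the with block is used, with the finalizer as a safety net when it is not:

with Package() as p:
    p.files.append('tmp.txt')  # assume this file exists
# The finalizer has run here; it fires at most once, so later garbage
# collection or interpreter shutdown will not run the cleanup again.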
atexit.register is the standard way as has already been mentioned in ostrakach's answer.
However, it must be noted that the order in which objects get deleted cannot be relied upon, as shown in the example below.
import atexit

class A(object):
    def __init__(self, val):
        self.val = val
        atexit.register(self.hello)

    def hello(self):
        print(self.val)

def hello2():
    a = A(10)

hello2()

a = A(20)
Here, the order seems legitimate, the reverse of the order in which the objects were created, as the program gives this output:
20
10
However, in a larger program, when Python's garbage collection kicks in, an object whose lifetime has ended may be destroyed earlier, so this ordering cannot be relied upon.
Related
I have read a little about multiprocessing and pickling problems. I have also read that there are some solutions, but I don't really know how they can help in my situation.
I am building a test runner where I use multiprocessing to call modified test class methods. The methods are modified by a metaclass so I can have setUp and tearDown methods before and after each test run.
Here is my Parent Metaclass:
from typing import Tuple

class MetaTestCase(type):
    def __new__(cls, name: str, bases: Tuple, attrs: dict):
        def replaced_func(fn):
            def new_test(*args, **kwargs):
                args[0].before()
                result = fn(*args, **kwargs)
                args[0].after()
                return result
            return new_test

        # If a method is found and its name starts with test, replace it
        for i in attrs:
            if callable(attrs[i]) and attrs[i].__name__.startswith('test'):
                attrs[i] = replaced_func(attrs[i])

        return super(MetaTestCase, cls).__new__(cls, name, bases, attrs)
I am using this subclass to apply the metaclass:
class TestCase(metaclass=MetaTestCase):
    def before(self) -> None:
        """Overridable, execute before test part."""
        pass

    def after(self) -> None:
        """Overridable, execute after test part."""
        pass
And then I use this in my TestSuite Class:
class TestApi(TestCase):
    def before(self):
        print('before')

    def after(self):
        print('after')

    def test_api_one(self):
        print('test')
Sadly, when I try to execute that test with multiprocessing.Process, it fails with:
AttributeError: Can't pickle local object 'MetaTestCase.__new__.<locals>.replaced_func.<locals>.new_test'
Here is how I create and execute Process:
module = importlib.import_module('tests.api.test_api') # Finding and importing module
object = getattr(module, 'TestApi') # Getting Class from module
process = Process(target=getattr(object, 'test_api_one')) # Calling class method
process.start()
process.join()
I tried to use pathos.helpers.mp.Process; it passes the pickling phase, I guess, but has some problem with a tuple that I don't understand:
Process Process-1:
Traceback (most recent call last):
result = fn(*args, **kwargs)
IndexError: tuple index out of range
Is there any simple solution for that, so I can pickle that object and run the test successfully along with my modified test class?
As for your original question of why you are getting the pickling error, this answer summarizes the problem and offers solutions (similar to those already provided here).
Now, as to why you are receiving the IndexError: this is because you are not passing an instance of the class to the function (the self argument). A quick fix would be to do this (also, please don't use object as a variable name):
module = importlib.import_module('tests.api.test_api') # Finding and importing module
obj = getattr(module, 'TestApi')
test_api = obj() # Instantiate!
# Pass the instance explicitly! Alternatively, you can also do target=test_api.test_api_one
process = Process(target=getattr(obj, 'test_api_one'), args=(test_api, ))
process.start()
process.join()
Of course, you can also opt to make the methods of the class into class methods, and pass the target function as obj.method_name.
Also, as a quick sidenote: the usage of a metaclass for the use case shown in the example seems like overkill. Are you sure you can't do what you want with class decorators instead (which might also be compatible with the standard library's multiprocessing)?
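For illustration, a rough sketch of such a class decorator (the names are mine, not from the original code; whether it resolves the pickling depends on how the target is passed):

import functools

def with_hooks(cls):
    """Class decorator: wrap every test* method so before()/after() run around it."""
    def wrap(fn):
        @functools.wraps(fn)
        def new_test(self, *args, **kwargs):
            self.before()
            result = fn(self, *args, **kwargs)
            self.after()
            return result
        return new_test

    for name in list(vars(cls)):
        attr = getattr(cls, name)
        if callable(attr) and name.startswith('test'):
            setattr(cls, name, wrap(attr))
    return cls

Applied as @with_hooks on TestApi, this removes the need for the metaclass entirely.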
https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled
"The following types can be pickled... functions (built-in and user-defined) accessible from the top level of a module (using def, not lambda);"
It sounds like you cannot pickle locally defined functions. This makes sense based on other pickle behavior I've seen. Essentially, pickle just stores instructions to the Python interpreter for how it can find the function definition. That usually means a module name and function name (for example), so the multiprocessing Process can import the correct function.
There's no way for another process to import your replaced_func function because it's only locally defined.
You could try defining it outside of the metaclass, which would make it importable by other processes.
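A tiny demonstration of that rule (a sketch, independent of the test-runner code):

import pickle

def top_level():
    pass  # importable from the module: pickles fine

def factory():
    def local():
        pass  # exists only inside factory: pickle cannot locate it by name
    return local

pickle.dumps(top_level)   # works
pickle.dumps(factory())   # AttributeError: Can't pickle local object 'factory.<locals>.local'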
I have a bit of an odd use-case. I already have a context manager for all my CLI applications which provides a lot of useful non-functional-concern tasks (e.g. config parsing, logging setup, response time calculation, etc.). I call it BaseCLI, as follows:
class BaseCLI(object):
    def __enter__(self):
        # ... a lot of stuff
        return config

    def __exit__(self, type, value, traceback):
        # exit implementation
I also have a DynamicScope implementation similar to the one in this answer: https://stackoverflow.com/a/63340335/1142881. I would like to plug in a dynamic scope instance to be reused as part of my existing BaseCLI to do FLOP counting ... how can I do that? The DynamicScope is also a context manager, and ideally I would like to do:
from somewhere import myscope

with BaseCLI(...) as config:
    with myscope.assign(flop_count=0):
        # ... count the flops in every inner scope
I would need to do this in every CLI instance, and that's not ideal. I would instead like to achieve the same effect from the BaseCLI __enter__ implementation, but I don't know how...
If I understand correctly, maybe something like this?
class BaseCLI(object):
    def __init__(self, scope, ...):
        self._scope = scope

    def __enter__(self):
        self._scope.__enter__()
        # ... a lot of stuff
        return config

    def __exit__(self, type, value, traceback):
        self._scope.__exit__(type, value, traceback)
        # exit implementation
from somewhere import myscope

with BaseCLI(myscope.assign(flop_count=0), ...) as config:
    ...
This will create the scope, and run its __enter__ and __exit__ methods together with the BaseCLI class.
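Another option (my sketch, not from the answer above) is contextlib.ExitStack, which lets BaseCLI own any number of inner context managers without forwarding the dunder calls by hand (myscope and config are placeholders from the question):

import contextlib

class BaseCLI(object):
    def __enter__(self):
        self._stack = contextlib.ExitStack()
        self._stack.enter_context(myscope.assign(flop_count=0))
        # ... a lot of stuff
        return config

    def __exit__(self, type, value, traceback):
        self._stack.close()  # exits every registered context manager
        # exit implementation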
First, I wrote a recording class with a flush method:
import numpy as np

class Recorder:
    def __init__(self, buffer_size, path):
        self._big_buffer = np.array(*buffer_size)
        self._path = path

    def push(self, data):
        # insert in self._big_buffer
        # if self._big_buffer is full:
        #     self._flush()

    def flush(self):
        # write buffer to disk (self._path)
Then, I wanted to flush at exit: when manually stopped, crashed or whatever reason.
So I used:
def __init__(self):
    (...)
    atexit.register(self.flush)
And it worked pretty well.
But now, I want to record, stop recording, record again, multiple times, with a different buffer size and a different path. So I have to discard older Recorders and instantiate new ones. It kind of works, but an older Recorder's memory (containing some fat self._big_buffer) is not freed, since it's retained by atexit, even when I explicitly call del.
I can't atexit.unregister(self._flush) since it's Python 3 only.
I would prefer not to reuse existing instances, but discarding older instances and create new ones.
How would you handle such a case?
You can try using a weak reference to the atexit handler, so the object won't be retained if it is deleted elsewhere:
import atexit
import weakref

class CallableMethodWeakRef:
    def __init__(self, obj, method_name):
        self.object_ref = weakref.ref(obj)
        self.method_name = method_name

    def __call__(self):
        obj = self.object_ref()
        if obj:
            getattr(obj, self.method_name)()

class Recorder:
    def __init__(self, *args):
        atexit.register(CallableMethodWeakRef(self, 'flush'))

    def flush(self):
        print 'flushing'
The method is passed as a string in order to avoid a lot of problems with bound method weak references. If you find it disturbing, you can always use a BoundMethodWeakref implementation like this one: http://code.activestate.com/recipes/578298-bound-method-weakref/
I would say you're trying to use the wrong tool. The with statement and context managers are a very good tool for this. File IO is the main example through which most Python users are introduced to the with statement.
f = open("somefile.txt", "w")
try:
    f.write("...")
    # more file operations
finally:
    # regardless of what happens, make sure the file is closed
    f.close()
Becomes:
with open("somefile.txt", "w") as f:
    f.write("...")
    # more file operations
# close automatically called at the end of the block
You can create your own context managers by writing __enter__ and __exit__ methods for your class.
class Recorder:
    def __init__(self, buffer_size, path):
        self._big_buffer = np.array(*buffer_size)
        self._path = path

    def push(self, data):
        # insert in self._big_buffer
        # if self._big_buffer is full:
        #     self._flush()

    def flush(self):
        # write buffer to disk (self._path)

    def __enter__(self):
        return self

    def __exit__(self, exctype, exception, traceback):
        # If an exception was thrown in the with block, you will get the details here.
        # If you want to say that the exception has been handled and for it not to be
        # raised outside the with block, then return True.
        self.flush()
        # self.close() ?
You would then use your Recorder object like:
with Recorder(...) as recorder:
    # operations with recorder
    ...
# regardless of what happens, the recorder will be flushed at this point
Surely the answer is to allow your Recorder to change paths and buffer characteristics at will. You say "I would prefer not to reuse existing instances, but discarding older instances and create new ones", but you don't give any rationale for that, except perhaps your assumption that the "older Recorder's memory (containing some fat self._big_buffer) is not freed since it's retained by atexit", which I believe is incorrect.
While it is true that atexit retains a reference to the recorder object, this only means that the buffer memory is retained as long as the recorder refers to it. It would be quite easy to add a close() method such as:
def close(self):
    self.flush()
    self._big_buffer = None
and bingo! No reference to the buffer memory exists, and it is collectable.
Your __init__() method should simply register with atexit; then the open() method (which does the rest of what __init__() currently does) can be used multiple times, each one followed by a close() call.
In summary, I think your problem cries out for a single object.
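A loose sketch of that single reusable object (the open/close split follows the suggestion above; the buffer type is a stand-in):

import atexit

class Recorder:
    def __init__(self):
        self._big_buffer = None
        atexit.register(self.close)  # registered once, for the lifetime of the program

    def open(self, buffer_size, path):
        self._big_buffer = bytearray(buffer_size)
        self._path = path

    def flush(self):
        pass  # write buffer to disk (self._path)

    def close(self):
        if self._big_buffer is not None:
            self.flush()
            self._big_buffer = None  # the big allocation is collectable again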
You can remove the handler by hand from the (undocumented) atexit._exithandlers list.
import atexit

def unregister(func, *targs, **kargs):
    """Unregister a function previously registered with atexit.

    Use exactly the same arguments as were used when registering.
    """
    for i in range(len(atexit._exithandlers)):
        if (func, targs, kargs) == atexit._exithandlers[i]:
            del atexit._exithandlers[i]
            return True
    return False
Hope that helps.
Some background: I work in a large bank and I'm trying to re-use some Python modules, which I cannot change, only import. I also don't have the option of installing any new utilities/functions etc (running Python 2.6 on Linux).
I've got this at present:
In my module:
from common.databaseHelper import BacktestingDatabaseHelper

class mfReportProcess(testingResource):
    def __init__(self):
        self.db = BacktestingDatabaseHelper.fromConfig('db_name')
One of the methods called within the 'testingResource' class has this:
with self.db as handler:
which falls over with this:
with self.db as handler:
AttributeError: 'BacktestingDatabaseHelper' object has no attribute '__exit__'
and, indeed, there is no __exit__ method in the 'BacktestingDatabaseHelper' class, a class which I cannot change.
However, this code I'm trying to re-use works perfectly well for other apps - does anyone know why I get this error and no-one else does?
Is there some way of defining __exit__ locally?
Many thanks in advance.
EDITED to add:
I've tried to add my own class to set up DB access but can't get it to work - I added this to my module:
class myDB(BacktestingDatabaseHelper):
    def __enter__(self):
        self.db = fromConfig('db_name')
    def __exit__(self):
        self.db.close()
and added:
self.db = myDB
into the __init__ method of my main class, but I get this error:
with self.db as handler:
TypeError: unbound method __enter__() must be called with myDB instance as first argument (got nothing instead)
Any suggestions as to how to do this properly?
Using the with protocol assumes that the object used in with implements the context manager protocol.
Basically this means that the class definition should have __enter__() and __exit__() methods defined. If you use an object without these, Python will throw an AttributeError complaining about the missing __exit__ attribute.
The error means that BacktestingDatabaseHelper is not designed to be used in a with statement. Sounds like the classes testingResource and BacktestingDatabaseHelper are not compatible with each other (perhaps your version of common.databaseHelper is out of date).
As you cannot change the with statement, you must add a class deriving from BacktestingDatabaseHelper which adds appropriate __enter__() and __exit__() functions and use this instead.
Here is an example which tries to be as close to the original as possible:
class myDB(BacktestingDatabaseHelper):
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.db.close()
    def fromConfig(self, name):
        x = super(myDB, self).fromConfig(name)
        assert isinstance(x, BacktestingDatabaseHelper)
        x.__class__ = myDB  # not sure if that really works
        return x

[...]

self.db = myDB.fromConfig('tpbp')
The problem is, however, that I am not sure what __enter__ is supposed to return. If you take MySQLdb, for example, the context manager of the connection creates a cursor representing one transaction. If that's the case here as well, you have to work out something else...
You might want to try the contextlib.contextmanager decorator to wrap your object so that it supports the context manager protocol.
The 'with' keyword is basically a shortcut for writing out:
try:
    # Do something
finally:
    handler.__exit__()
which is useful if your handler object is using up resources (like, for example, an open file stream). It makes sure that no matter what happens in the 'do something' part, the resource is released cleanly.
In your case, your handler object doesn't have an __exit__ method, and so with fails. I would assume that other people can use BacktestingDatabaseHelper because they're not using with.
As for what you can do now, I would suggest forgetting with and using try ... finally instead, rather than trying to add your own version of __exit__ to the object. You'll just have to make sure you release the handler properly (how you do this will depend on how BacktestingDatabaseHelper is supposed to be used), e.g.
try:
    handler = self.db
    # do stuff
finally:
    handler.close()
Edit:
Since you can't change it, you should do something like @Daniel Roseman suggests to wrap BacktestingDatabaseHelper. Depending on how best to clean up BacktestingDatabaseHelper (as above), you can write something like:
from contextlib import contextmanager

@contextmanager
def closing(thing):
    try:
        yield thing
    finally:
        thing.close()
and use this as:
class mfReportProcess(testingResource):
    def __init__(self):
        self.db = closing(BacktestingDatabaseHelper.fromConfig('db_name'))
(this is directly from the documentation).
Can someone explain why the following code behaves the way it does:
import types

class Dummy():
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print "delete", self.name

d1 = Dummy("d1")
del d1
d1 = None
print "after d1"

d2 = Dummy("d2")
def func(self):
    print "func called"
d2.func = types.MethodType(func, d2)
d2.func()
del d2
d2 = None
print "after d2"

d3 = Dummy("d3")
def func(self):
    print "func called"
d3.func = types.MethodType(func, d3)
d3.func()
d3.func = None
del d3
d3 = None
print "after d3"
The output (note that the destructor for d2 is never called) is this (Python 2.7):
delete d1
after d1
func called
after d2
func called
delete d3
after d3
Is there a way to "fix" the code so the destructor is called without deleting the method added? I mean, the best place to put the d2.func = None would be in the destructor!
Thanks
[edit] Based on the first few answers, I'd like to clarify that I'm not asking about the merits (or lack thereof) of using __del__. I tried to create the shortest function that would demonstrate what I consider to be non-intuitive behavior. I'm assuming a circular reference has been created, but I'm not sure why. If possible, I'd like to know how to avoid the circular reference....
You cannot assume that __del__ will ever be called - it is not a place to hope that resources are automagically deallocated. If you want to make sure that a (non-memory) resource is released, you should make a release() or similar method and then call that explicitly (or use it in a context manager as pointed out by Thanatos in comments below).
At the very least you should read the __del__ documentation very closely, and then you should probably not try to use __del__. (Also refer to the gc.garbage documentation for other bad things about __del__)
I'm providing my own answer because, while I appreciate the advice to avoid __del__, my question was how to get it to work properly for the code sample provided.
Short version: The following code uses weakref to avoid the circular reference. I thought I'd tried this before posting the question, but I guess I must have done something wrong.
import types, weakref

class Dummy():
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print "delete", self.name

d2 = Dummy("d2")
def func(self):
    print "func called"
d2.func = types.MethodType(func, weakref.ref(d2))  # This works
#d2.func = func.__get__(weakref.ref(d2), Dummy)  # This works too
d2.func()
del d2
d2 = None
print "after d2"
Longer version:
When I posted the question, I did search for similar questions. I know you can use with instead, and that the prevailing sentiment is that __del__ is BAD.
Using with makes sense, but only in certain situations. Opening a file, reading it, and closing it is a good example where with is a perfectly good solution. You've got a specific block of code where the object is needed, and you want to clean up the object at the end of the block.
A database connection seems to be used often as an example that doesn't work well using with, since you usually need to leave the section of code that creates the connection and have the connection closed in a more event-driven (rather than sequential) timeframe.
If with is not the right solution, I see two alternatives:
You make sure __del__ works (see this blog for a better description of weakref usage).
You use the atexit module to run a callback when your program closes (see this topic for example).
While I tried to provide simplified code, my real problem is more event-driven, so with is not an appropriate solution (with is fine for the simplified code). I also wanted to avoid atexit, as my program can be long-running, and I want to be able to perform the cleanup as soon as possible.
So, in this specific case, I find it to be the best solution to use weakref and prevent circular references that would prevent __del__ from working.
This may be an exception to the rule, but there are use-cases where using weakref and __del__ is the right implementation, IMHO.
Instead of del, you can use the with statement.
http://effbot.org/zone/python-with-statement.htm
Just like with file-type objects, you could do something like:
with Dummy('d1') as d:
    # stuff
# d's __exit__ method is guaranteed to have been called
del doesn't call __del__
del, in the way you are using it, removes a local variable. __del__ is called when the object is destroyed. Python as a language makes no guarantees as to when it will destroy an object.
CPython, the most common implementation of Python, uses reference counting. As a result, del will often work as you expect. However, it will not work in the case that you have a reference cycle:
d3 -> d3.func -> d3
Python doesn't detect this and so won't clean it up right away. And it's not just reference cycles: if an exception is thrown, you probably still want your destructor to be called. However, Python will typically hold onto the local variables as part of its traceback.
The solution is not to depend on the __del__ method. Rather, use a context manager.
class Dummy:
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        print "Destroying", self

with Dummy() as dummy:
    # Do whatever you want with dummy in here
    pass
# __exit__ will be called before you get here
This is guaranteed to work, and you can even check the parameters to see whether you are handling an exception and do something different in that case.
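For instance, a minimal sketch of checking those parameters (not from the answer above):

class Guarded(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None:
            print("cleaning up after %s" % exc_type.__name__)
        return False  # False lets the exception propagate; True would suppress it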
A full example of a context manager.
class Dummy(object):
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, exct_type, exce_value, traceback):
        print 'cleanup:', self
    def __repr__(self):
        return 'Dummy(%r)' % (self.name,)

with Dummy("foo") as d:
    print 'using:', d

print 'later:', d
It seems to me the real heart of the matter is here:
adding the functions is dynamic (at runtime) and not known in advance
I sense that what you are really after is a flexible way to bind different functionality to an object representing program state, also known as polymorphism. Python does that quite well, not by attaching/detaching methods, but by instantiating different classes. I suggest you look again at your class organization. Perhaps you need to separate a core, persistent data object from transient state objects. Use the has-a paradigm rather than is-a: each time state changes, you either wrap the core data in a state object, or you assign the new state object to an attribute of the core.
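A loose sketch of that has-a arrangement (illustrative names only):

class LoudState(object):
    def bark(self):
        print("WOOF WOOF")

class QuietState(object):
    def bark(self):
        print("woof woof")

class Dog(object):
    def __init__(self):
        self.state = LoudState()  # swap whole state objects instead of rebinding methods

    def bark(self):
        self.state.bark()  # delegate to the current state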
If you're sure you can't use that kind of pythonic OOP, you could still work around your problem another way by defining all your functions in the class to begin with and subsequently binding them to additional instance attributes (unless you're compiling these functions on the fly from user input):
class LongRunning(object):
    def bark_loudly(self):
        print("WOOF WOOF")

    def bark_softly(self):
        print("woof woof")

while True:
    d = LongRunning()
    d.bark = d.bark_loudly
    d.bark()
    d.bark = d.bark_softly
    d.bark()
An alternative solution to using weakref is to dynamically bind the function to the instance only when it is called, by overriding __getattr__ or __getattribute__ on the class to return func.__get__(self, type(self)) instead of just func for functions bound to the instance. This is how functions defined on the class behave. Unfortunately (for some use cases) Python doesn't perform the same logic for functions attached to the instance itself, but you can modify it to do this. I've had similar problems with descriptors bound to instances. Performance here probably isn't as good as using weakref, but it is an option that works transparently for any dynamically assigned function, using only Python builtins.
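A rough sketch of that idea, assuming plain functions are stored directly in the instance __dict__ (my illustration, not the answer's original code):

import types

class AutoBinding(object):
    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        instance_dict = object.__getattribute__(self, '__dict__')
        # Bind plain functions found on the instance, mimicking how
        # functions defined on the class become bound methods.
        if name in instance_dict and isinstance(attr, types.FunctionType):
            return attr.__get__(self, type(self))
        return attr

d = AutoBinding()
def func(self):
    print("bound to %r" % self)
d.func = func
d.func()  # func receives d automatically; no reference cycle is stored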
If you find yourself doing this often, you might want a custom metaclass that does dynamic binding of instance-level functions.
Another alternative is to add the function directly to the class, which will then properly perform the binding when it's called. For a lot of use cases, this would have some headaches involved: namely, properly namespacing the functions so they don't collide. The instance id could be used for this; though, since the id in CPython isn't guaranteed unique over the life of the program, you'd need to ponder this a bit to make sure it works for your use case. In particular, you probably need to make sure you delete the class function when an object goes out of scope, and thus its id/memory address becomes available again. __del__ is perfect for this :). Alternatively, you could clear out all methods namespaced to the instance on object creation (in __init__ or __new__).
Another alternative (rather than messing with python magic methods) is to explicitly add a method for calling your dynamically bound functions. This has the downside that your users can't call your function using normal python syntax:
class MyClass(object):
    def dynamic_func(self, func_name):
        return getattr(self, func_name).__get__(self, type(self))

    def call_dynamic_func(self, func_name, *args, **kwargs):
        return getattr(self, func_name).__get__(self, type(self))(*args, **kwargs)

    """
    Alternate without using descriptor functionality:
    def call_dynamic_func(self, func_name, *args, **kwargs):
        return getattr(self, func_name)(self, *args, **kwargs)
    """
Just to make this post complete, I'll show your weakref option as well:
import weakref

inst = MyClass()

def func(self):
    print 'My func'

# You could also use the types module, but the descriptor method is cleaner IMO
inst.func = func.__get__(weakref.ref(inst), type(inst))
use eval()
In [1]: int('25.0')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-1-67d52e3d0c17> in <module>
----> 1 int('25.0')
ValueError: invalid literal for int() with base 10: '25.0'
In [2]: int(float('25.0'))
Out[2]: 25
In [3]: eval('25.0')
Out[3]: 25.0
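If the input isn't trusted, ast.literal_eval handles this particular case as well (a side note, not part of the original answer):

In [4]: import ast

In [5]: ast.literal_eval('25.0')
Out[5]: 25.0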