I'm using PyGtk.
Will a runtime-generated function connected to the signal "drag_data_get" of a widget be garbage-collected when the widget is destroyed?
Same question about the Gtk.TargetList objects that are created and associated with drag source/dest targets?
I did find Python and GTK+: How to create garbage collector friendly objects? but it does not help much.
In short: yes, it will; dynamically created functions are garbage-collected just like any other Python objects created at run-time.
Longer answer: For resources managed by the garbage collector, such as objects not tied to an external resource, Python and PyGTK will correctly dispose of unused objects. For external resources, such as open files or running threads, you need to take steps to ensure their correct cleanup. To answer your question precisely, it would be useful to see concrete code. In general, the following things apply to Python and GTK:
Python objects, including dynamically created functions, are deallocated some time after they can no longer be reached from Python. In some cases deallocation happens immediately after the object becomes unreachable (if the object is not involved in reference cycles), while in others you must wait for the garbage collector to kick in.
Destroying a widget causes GTK resources associated with the widget to be released immediately. The Python object itself can remain alive. Callbacks reachable through the widget are dereferenced immediately and, provided nothing else holds on to them from Python, soon deallocated.
You can use the weak reference type from the weakref module to test this. For example:
>>> import gtk
>>>
>>> def report_death(obj):
...     # arrange for the death of OBJ to be announced
...     def announce(wr):
...         print 'gone'
...     import weakref
...     report_death.wr = weakref.ref(obj, announce)
...
>>> def make_dynamic_handler():
...     def handler():
...         pass
...     # for debugging - we want to know when the handler is freed
...     report_death(handler)
...     return handler
...
>>> w = gtk.Window()
>>> w.connect('realize', make_dynamic_handler())
10L
>>> w.destroy()
gone
Now, if you change the code of handler to include a circular reference, e.g. by modifying it to mention itself:
def handler():
    handler   # closure with circular reference
...the call to destroy will no longer cause gone to be printed immediately - that will require the program to keep running long enough for the cyclic garbage collector to kick in, or an explicit call to gc.collect(). In most Python and PyGTK programs automatic deallocation "just works" and you don't need to make an effort to help it.
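For example, continuing the session above after redefining handler with the cycle (a sketch; the handler id and collect() count shown are illustrative):

>>> w2 = gtk.Window()
>>> w2.connect('realize', make_dynamic_handler())
11L
>>> w2.destroy()
>>> # no 'gone' printed - the cycle keeps the handler alive
>>> import gc
>>> gc.collect()
gone
2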
Ultimately, the only reliable test whether there is a memory leak is running the suspect code in an infinite loop and monitoring the memory consumption of the process - if it grows without bounds, something is not getting deallocated and you have a memory leak.
I have a question of functionality that I am uncertain about. I'm not stuck on code implementation, so there is no need to share code for this question. I use
layout.removeWidget(widget)
widget.deleteLater()
a couple/few times in a project I'm building. I once read here on SO that deleteLater() does not function as described when deleting a widget with child widgets. The comment said that child widgets do not get deleted correctly, and will stay in memory, which could lead to memory bloat/leaks. Simply put (my question), is this true?
The docs mention nothing of this that I could find. And it was just one comment from years back, that was actually written for PyQt5 (or 4), I believe. So, have any of you guys done any tests on this? Is this a bug in older versions of PyQt that has been fixed, or was the commentor outright wrong?
As you can see from my question, my issue isn't how to write something, it's about behind the scenes functionality of deleteLater().
First of all, it is important to remember that PyQt (as PySide) is a binding to Qt, which is written in C++.
All Qt objects and functions are accessed from python using wrappers.
When a Qt object is created from Python, there are actually two objects:
the C++ object;
the python object that allows us to use the above Qt object;
The lifespan of those two objects may not always coincide. In fact, the C++ object can be destroyed while the python reference remains, or the other way around.
C++ object destruction
deleteLater() is guaranteed to destroy the C++ object along with any objects it has ownership of (including indirect ownership of grand[grand, ...]children). Note that the object is not destroyed immediately, but only as soon as control returns to the event loop: then the object emits the destroyed() signal, and after that all its children are destroyed along with it, recursively.
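As a minimal sketch of that deferred destruction (PyQt5 assumed; the names are illustrative), the destroyed() signal makes the ordering visible:

from PyQt5.QtCore import QCoreApplication, QEvent
from PyQt5.QtWidgets import QApplication, QLabel, QWidget

app = QApplication([])
parent = QWidget()
child = QLabel('child', parent)
parent.destroyed.connect(lambda: print('parent destroyed'))
child.destroyed.connect(lambda: print('child destroyed'))

parent.deleteLater()   # nothing is destroyed yet: deletion is only scheduled
print('still only scheduled')
# destruction runs once control returns to the event loop; outside of
# app.exec() it can be forced by delivering the pending DeferredDelete events:
QCoreApplication.sendPostedEvents(None, QEvent.DeferredDelete)
# prints 'parent destroyed', then 'child destroyed'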
This will not delete the python reference, if it existed. For instance:
class MyLabel(QLabel):
    def __init__(self, value, parent=None):
        super().__init__(value, parent)
        self.value = value
If you keep a python reference to an instance of the class above, you will still be able to access its self.value even after the object has been destroyed with deleteLater(). If you try to access any Qt functions or properties, instead, you'll get an exception:
>>> print(mylabel.value)
some value
>>> print(mylabel.text())
RuntimeError: wrapped C/C++ object of type MyLabel has been deleted
So, the python object will obviously keep using memory resources until it gets garbage collected (its reference count becomes 0). This can be a problem if you keep references to objects that have a large memory footprint on their python side (e.g., a large numpy array).
Python object destruction
Deleting a python reference doesn't guarantee the destruction of the Qt object, as it only deletes the python object. For instance:
>>> parent = QWidget()
>>> label = MyLabel('hello', parent)
>>> parent.show()
>>> del label
>>> print(parent.findChild(QLabel).text())
hello
This is because when an object is added to a parent, that parent takes ownership of the child.
Note that Qt objects that have no parent will get destroyed when all python references to them are gone.
That is the reason for one of the most common questions here on SO: somebody tries to create another window using a local reference and without any parent object, and the window is not shown because it gets immediately garbage collected.
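A minimal sketch of that common mistake (hypothetical names):

def show_popup_broken():
    win = QWidget()   # no parent, held only by a local name
    win.show()
    # when the function returns, 'win' was the only reference;
    # the object is garbage collected and the window never stays visible

def show_popup_fixed(owner):
    owner.popup = QWidget()   # keep a reference on a long-lived object
    owner.popup.show()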
Note that this is valid for Qt6 as it is for Qt5 and as it was for Qt4, and also for their respective python bindings.
Considering what was written in the first part:
>>> parent = QWidget()
>>> label = MyLabel('hello', parent)
>>> parent.show()
>>> del parent
>>> print(label.value)
hello
>>> print(label.text())
RuntimeError: wrapped C/C++ object of type MyLabel has been deleted
So, what must be always kept in mind is where objects live, including their C++ and python properties and attributes. If you're worried about memory usage for standard (not python-subclassed) widgets, deleteLater() is always guaranteed to release the memory resources of those widgets, and you only need to ensure that no remaining python reference still exists (but that would be true for any python program).
Finally, some considerations:
in some very complex situations it's not always possible to keep track of all python references, so it's possible that even if you destroyed a parent, references to some child objects still exist;
not all Qt objects are QObjects, and their usage, behavior or memory management depends both on Qt and the C++ implementation; those objects can sometimes be destroyed when their python references are, or when Qt "decides so";
not all QObjects take ownership of other QObjects when they are "added" to them: one common case is QAction, which can be shared among many objects (QMenu, QMenuBar, QToolBar, and even standard QWidgets); always look for the "...takes ownership" phrase in the documentation whenever in doubt (see the sketch below);
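For instance, a minimal sketch of the QAction case (PyQt5 assumed; the widget names are illustrative):

from PyQt5.QtWidgets import QAction, QApplication, QMainWindow

app = QApplication([])
window = QMainWindow()
menu = window.menuBar().addMenu('File')
toolbar = window.addToolBar('Main')

action = QAction('Save')      # no parent: nothing owns this action yet
menu.addAction(action)        # the menu does NOT take ownership
toolbar.addAction(action)     # neither does the toolbar
window.save_action = action   # so keep a python reference explicitly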
I was reading about different ways to clean up objects in Python, and I have stumbled upon these questions (1, 2) which basically say that cleaning up using __del__() is unreliable and the following code should be avoided:
def __init__(self):
    rc.open()

def __del__(self):
    rc.close()
The problem is, I'm using exactly this code, and I can't reproduce any of the issues cited in the questions above. As far as my knowledge goes, I can't go for the alternative with the with statement, since I provide a Python module for a closed-source software (testIDEA, anyone?). This software will create instances of particular classes and dispose of them; these instances have to be ready to provide services in between. The only alternative to __del__() that I see is to manually call open() and close() as needed, which I assume will be quite bug-prone.
I understand that when I'll close the interpreter, there's no guarantee that my objects will be destroyed correctly (and it doesn't bother me much, heck, even Python authors decided it was OK). Apart from that, am I playing with fire by using __del__() for cleanup?
You are observing the typical issue with finalizers in garbage-collected languages. Java has it, C# has it, and they all provide a scope-based cleanup method like the Python with keyword to deal with it.
The main issue is that the garbage collector is responsible for cleaning up and destroying objects. In C++ an object gets destroyed when it goes out of scope, so you can use RAII and have well-defined semantics. In Python the object goes out of scope and lives on as long as the GC likes. Depending on your Python implementation this may differ: CPython with its refcounting-based GC is rather benign (so you rarely see issues), while PyPy, IronPython and Jython might keep an object alive for a very long time.
For example:
def bad_code(filename):
    return open(filename, 'r').read()

for i in xrange(10000):
    bad_code('some_file.txt')
bad_code leaks a file handle. In CPython it doesn't matter. The refcount drops to zero and it is deleted right away. In PyPy or IronPython you might get IOErrors or similar issues, as you exhaust all available file descriptors (up to ulimit on Unix or 509 handles on Windows).
Scope-based cleanup with a context manager and with is preferable if you need to guarantee cleanup. You know exactly when your objects will be finalized. But sometimes you cannot enforce this kind of scoped cleanup easily. That's when you might use __del__, atexit or similar constructs to do a best effort at cleaning up. It is not reliable, but better than nothing.
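As a minimal sketch of that scope-based alternative, reusing the asker's rc object from the snippet above (rc stands in for their resource; not a drop-in solution for their closed-source host):

class ResourceGuard(object):
    """Pair rc.open()/rc.close() with a well-defined scope."""
    def __enter__(self):
        rc.open()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        rc.close()     # runs even if the with-body raised
        return False   # do not swallow exceptions

with ResourceGuard():
    pass  # work with rc here; it is closed as soon as the block exits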
You can either burden your users with explicit cleanup, or enforce explicit scopes, or you can take the gamble with __del__ and see some oddities now and then (especially at interpreter shutdown).
There are a few problems with using __del__ to run code.
For one, it only works if you're actively keeping track of references, and even then, there's no guarantee that it will be run immediately unless you're manually kicking off garbage collections throughout your code. I don't know about you, but automatic garbage collection has pretty much spoiled me in terms of accurately keeping track of references. And even if you are super diligent in your code, you're also relying on other users that use your code being just as diligent when it comes to reference counts.
Two, there are lots of instances where __del__ is never going to run. Was there an exception while objects were being initialized and created? Did the interpreter exit? Is there a circular reference somewhere? Yep, lots that could go wrong here and very few ways to cleanly and consistently deal with it.
Three, even if it does run, exceptions raised inside __del__ are not propagated; they are ignored (with a warning printed to stderr), so you can't handle them like you can exceptions from other code. It's also nearly impossible to guarantee that the __del__ methods from various objects will run in any particular order. So the most common use case for destructors - cleaning up and deleting a bunch of objects - is kind of pointless and unlikely to go as planned.
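A quick illustration of the exception behavior (Python 3 session; the address and line numbers will differ):

>>> class Bad:
...     def __del__(self):
...         raise ValueError('boom')
...
>>> b = Bad()
>>> del b
Exception ignored in: <function Bad.__del__ at 0x7f1c2b3d4e50>
Traceback (most recent call last):
  File "<stdin>", line 3, in __del__
ValueError: boom
>>> # the interpreter just carries on; nothing could catch that error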
If you actually want code to run, there are much better mechanisms -- context managers, signals/slots, events, etc.
If you're using CPython, then __del__ fires perfectly reliably and predictably as soon as an object's reference count hits zero. The docs at https://docs.python.org/3/c-api/intro.html state:
When an object’s reference count becomes zero, the object is deallocated. If it contains references to other objects, their reference count is decremented. Those other objects may be deallocated in turn, if this decrement makes their reference count become zero, and so on.
You can easily test and see this immediate cleanup happening yourself:
>>> class Foo:
...     def __del__(self):
...         print('Bye bye!')
...
>>> x = Foo()
>>> x = None
Bye bye!
>>> for i in range(5):
...     print(Foo())
...
<__main__.Foo object at 0x7f037e6a0550>
Bye bye!
<__main__.Foo object at 0x7f037e6a0550>
Bye bye!
<__main__.Foo object at 0x7f037e6a0550>
Bye bye!
<__main__.Foo object at 0x7f037e6a0550>
Bye bye!
<__main__.Foo object at 0x7f037e6a0550>
Bye bye!
>>>
(Though if you want to test stuff involving __del__ at the REPL, be aware that the last evaluated expression's result gets stored as _, which counts as a reference.)
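For instance (a sketch continuing the session above; the displayed address will differ):

>>> Foo()                 # the result is kept alive by _
<__main__.Foo object at 0x7f037e6a0550>
>>> 2 + 2                 # _ is rebound, releasing the old Foo
4
Bye bye!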
In other words, if your code is strictly going to be run in CPython, relying on __del__ is safe.
In this Python code
import gc
gc.disable()
<some code ...>
MyClass()
<more code...>
I am hoping that the anonymous object created by the MyClass constructor will not be garbage-collected. MyClass actually links to a shared object library of C++ code, and there, through raw memory pointers, I am able to inspect the contents of the anonymous object.
I can then see that the object is immediately corrupted (garbage collected).
How to prevent Python garbage collection for everything?
I have to keep this call anonymous. I cannot change the part of the code MyClass() - it has to be kept as is.
MyClass() has to be kept as is, because it is an exact translation from C++ (by way of SWIG) and the two should be identical for the benefit of people who translate.
I have to prevent the garbage collection by some "initialization code", that is only called once at the beginning of the program. I cannot touch anything after that.
The "garbage collector" referred to in gc is only used for resolving circular references. In Python (at least in the main C implementation, CPython) the main method of memory management is reference counting. In your code, the result of MyClass() has no references, so will always be disposed immediately. There's no way of preventing that.
What is not clear, even with your edit, is why you can't simply assign it to something? If the target audience is "people who translate", those people can presumably read, so write a comment explaining why you're doing the assignment.
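To see that gc.disable() doesn't change this, here is a small self-contained sketch (with a hypothetical stand-in for the SWIG-generated MyClass):

import gc

gc.disable()   # turns off only the cyclic collector

class MyClass(object):
    def __del__(self):
        print('disposed anyway')

MyClass()      # no name binds the result: the refcount drops to zero
# 'disposed anyway' is printed immediately, gc.disable() notwithstanding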
I'm trying to implement a clean-up routine in a utility module I have. In looking around for solutions to my problem, I finally settled on using a weakref callback to do my cleanup. However, I'm concerned that it won't work as expected because of a strong reference to the object from within the same module. To illustrate:
foo_lib.py
import weakref

class Foo(object):
    _refs = {}

    def __init__(self, x):
        self.x = x
        self._weak_self = weakref.ref(self, Foo._clean)
        Foo._refs[self._weak_self] = x

    @classmethod
    def _clean(cls, ref):
        print 'cleaned %s' % cls._refs[ref]

foo = Foo('some value')
Other classes then reference foo_lib.foo. I did find an old document from 1.5.1 that sort of references my concerns (http://www.python.org/doc/essays/cleanup/) but nothing that makes me fully comfortable that foo will be released in such a way that the callback will be triggered reliably. Can anyone point me towards some docs that would clear this question up for me?
The right thing to do here is to explicitly release your strong reference at some point, rather than counting on shutdown to do it.
In particular, if the module is released, its globals will be released… but it doesn't seem to be documented anywhere that the module will get released. So, there may still be a reference to your object at shutdown. And, as Martijn Pieters pointed out:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
However, if you can ensure that there are no (non-weak) references to your object some time before the interpreter exits, you can guarantee that your cleanup runs.
You can use atexit handlers to explicitly clean up after yourself, or you can just do it explicitly before falling off the end of your main module (or calling sys.exit, or finishing your last non-daemon thread, or whatever). The simplest thing to do is often to take your entire main function and wrap it in a with or try/finally.
Or, even more simply, don't try to put cleanup code into __del__ methods or weakref callbacks; just put the cleanup code itself into your with or finally or atexit.
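For example, a minimal sketch of the atexit route, in the spirit of the subprocess case discussed below (proc and the child command are hypothetical):

import atexit
import subprocess
import sys

proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(3600)'])

def close_resources():
    if proc.poll() is None:   # still running
        proc.terminate()
        proc.wait()

atexit.register(close_resources)
# ...rest of the program; close_resources runs at normal interpreter exit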
In a comment on another answer:
what I'm actually trying to do is close out a subprocess that is normally kept open by a timer, but needs to be nuked when the program exits. Is the only really "reliable" way to do this to start a daemonic subprocess to monitor and kill the other process separately?
The usual way to do this kind of thing is to replace the timer with something signalable from outside. Without knowing your app architecture and what kind of timer you're using (e.g., a single-threaded async server where the reactor kicks the timer vs. a single-threaded async GUI app where an OS timer message kicks the timer vs. a multi-threaded app where the timer is just a thread that sleeps between intervals vs. …), it's hard to explain more specifically.
Meanwhile, you may also want to look at whether there's a simpler way to handle your subprocesses. For example, maybe using an explicit process group, and killing your process group instead of your process (which will kill all of the children, on both Windows and Unix… although the details are very different)? Or maybe give the subprocess a pipe and have it quit when the other end of the pipe goes down?
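A rough sketch of the pipe idea (worker.py is a hypothetical child script):

# parent side
import subprocess
import sys

child = subprocess.Popen([sys.executable, 'worker.py'],
                         stdin=subprocess.PIPE)
# the parent never writes to child.stdin; it exists only as a lifeline

# child side (worker.py):
#   import sys
#   sys.stdin.read()   # blocks until the parent exits and the pipe closes
#   # ...clean up and exit...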
Note that the documentation also gives you no guarantees about the order in which left-over references are deleted, if they are. In fact, if you're using CPython, Py_Finalize specifically says that it's "done in random order".
The source is interesting. It's obviously not explicitly randomized, and it's not even entirely arbitrary. First it does GC collect until nothing is left, then it finalizes the GC itself, then it does a PyImport_Cleanup (which is basically just sys.modules.clear()), then there's another collect commented out (with some discussion as to why), and finally a _PyImport_Fini (which is defined only as "For internal use only").
But this means that, assuming your module really is holding the only (non-weak) reference(s) to your object, and there are no unbreakable cycles involving the module itself, your module will get cleaned up at shutdown, which will drop the last reference to your object, causing it to get cleaned up as well. (Of course you cannot count on anything other than builtins, extension modules, and things you have a direct reference to still existing at this point… but your code above should be fine, because foo can't be cleaned up before Foo, and it doesn't rely on any other non-builtins.)
Keep in mind that this is CPython-specific—and in fact CPython 3.3-specific; you will want to read the relevant equivalent source for your version to be sure. Again, the documentation explicitly says things get deleted "in random order", so that's what you have to expect if you don't want to rely on implementation-specific behavior.
Of course your cleanup code still isn't guaranteed to be called. For example, an unhandled signal (on Unix) or structured exception (on Windows) will kill the interpreter without giving it a chance to clean up anything. And even if you write handlers for that, someone could always pull the power cord. So, if you need a completely robust design, you need to be interruptable without cleanup at any point (by journaling, using atomic file operations, protocols with explicit acknowledgement, etc.).
Python modules are cleaned up when exiting, and any __del__ methods probably are called:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
Names starting with an underscore are cleared first:
Starting with version 1.5, Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the __del__() method is called.
Weak reference callbacks rely on the same mechanisms as __del__ methods do: the C deallocation functions (type->tp_dealloc).
The foo instance will retain a reference to the Foo._clean class method, but the global name Foo could be cleared already (it is assigned None in CPython); your method should be safe as it never refers to Foo once the callback has been registered.
I create a couple of worker processes using Python's multiprocessing module (Python 2.6). In each worker I use the standard logging module (with log rotation and a file per worker) to keep an eye on the worker. I've noticed that after a couple of hours no more events are written to the log. The process doesn't appear to crash and still responds to commands via my queue. Using lsof I can see that the log file is no longer open. I suspect the log object may be killed by the garbage collector; if so, is there a way that I can mark it to protect it?
I agree with @THC4k. This doesn't seem like a GC issue. I'll give you my reasons why, and I'm sure somebody will vote me down if I'm wrong (if so, please leave a comment pointing out my error!).
If you're using CPython, it primarily uses reference counting, and objects are destroyed immediately when the ref count goes to zero (since 2.0, supplemental garbage collection is also provided to handle the case of circular references). Keep a reference to your log object and it won't be destroyed.
If you're using Jython or IronPython, the underlying VM does the garbage collection. Again, keep a reference and the GC shouldn't touch it.
Either way, it seems that either you're not keeping a reference to an object you need to keep alive, or you have some other error.
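A minimal sketch of "keep a reference" in the worker (the names are illustrative; note that the logging module itself also keeps named loggers alive in a global registry):

import logging
import logging.handlers

class Worker(object):
    def __init__(self, logfile):
        # both the logger and its handler stay referenced for the
        # worker's whole lifetime, so refcounting can't reclaim them
        self.logger = logging.getLogger('worker')
        self.handler = logging.handlers.RotatingFileHandler(
            logfile, maxBytes=1 << 20, backupCount=3)
        self.logger.addHandler(self.handler)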
http://docs.python.org/reference/datamodel.html#object.__del__
According to this documentation, the __del__() method is called on object destruction, and you can at this point create a reference to the object to prevent it from being collected. I am not sure how to do this; hopefully this gives you some food for thought.
You could run gc.collect() immediately after fork() to see if that causes the log to be closed. But it's not likely garbage collection would take effect only after a few hours.