Python bytes object underlying buffer reallocation (CFFI)

I'm using python bytes objects to pass some data to natively implemented methods using the CFFI library, for example:
from cffi import FFI
ffi = FFI()
lib = ffi.dlopen(libname)
ffi.cdef("""
void foo(char*);
""")
x = b'abc123'
lib.foo(x)
As far as I understand, the pointer received by the native method is that of the actual underlying buffer behind the x bytes object. This works fine 99% of the time, but sometimes the pointer seems to get invalidated and the data it points to turns to garbage some time after the native call has finished. The native code keeps the pointer around after returning from the initial call and expects the data to still be there, and the Python code makes sure to keep a reference to x so that the pointer, hopefully, remains valid.
In these cases, if I call a native method with the same bytes object again, I can see that I get a different pointer, pointing to the same value but located at a different address. This indicates that the underlying buffer behind the bytes object has moved (assuming CFFI really does extract a pointer to the array contained inside the bytes object, and no temporary copy is created anywhere), even though, to the best of my knowledge, the bytes object has not been modified in any way (the code is part of a large codebase, but I'm reasonably sure the bytes objects are not modified directly by any code).
What could be happening here? Is my assumption that CFFI gets a pointer to the actual internal buffer of the bytes object incorrect? Is Python perhaps allowed to silently reallocate the buffers behind bytes objects for garbage collection / memory compaction reasons, and does it do so unaware that I'm holding a pointer to it? I'm using PyPy instead of the default Python interpreter, if that makes a difference.

Your guess is the correct answer. The (documented) guarantee is only that the pointer passed in this case is valid for the duration of the call.
PyPy's garbage collector can move objects in memory, if they are small enough that doing so is a win in overall performance. When doing such a cffi call, though, pypy will generally mark the object as "pinned" for the duration of the call (unless there are already too many pinned objects and adding more would seriously hurt future GC performance; in this rare case it will make a copy anyway and free it afterwards).
If your C code needs to access the memory after the call has returned, you have to explicitly make a copy, e.g. with ffi.new("char[]", mybytes), and keep it alive for as long as needed.
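A minimal sketch of that fix, reusing the ffi and lib objects from the question's snippet: copy the bytes into cffi-owned memory with ffi.new() and keep a Python reference to the copy for as long as the C side holds the pointer.

keepalive = []                      # any container that keeps the cdata referenced works

def call_foo(data):
    buf = ffi.new("char[]", data)   # copies data into memory owned by buf
    keepalive.append(buf)           # pointer stays valid while buf stays referenced
    lib.foo(buf)

call_foo(b'abc123')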

Related

Accessing the memory heap in python

Is there a way to access the memory heap in Python? I'm interested in being able to access all of the objects allocated in memory of the running instance.
You can't get direct access, but the gc module should do most of what you want. A simple gc.get_objects() call will return all the objects tracked by the collector. This isn't everything, since the CPython garbage collector is only concerned with potential reference cycles, so instances of built-in types that can't refer to other objects (e.g. int, float, str) won't appear in the resulting list; but they'll all be referenced by something in that list (if they weren't, their reference count would be zero and they'd have been disposed of already).
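For example, a minimal sketch of that approach: grab everything the collector tracks and filter by type (Foo here is just a stand-in for whatever class you're hunting for).

import gc

class Foo:
    pass

foos = [Foo() for _ in range(3)]

live_foos = [o for o in gc.get_objects() if isinstance(o, Foo)]
print(len(live_foos))   # 3: every tracked Foo instance currently alive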
Aside from that, you might get some more targeted use out of the inspect module, especially stack frame inspection: either using the traceback module for "easy formatting", or manually digging into the semi-documented frame objects themselves. Either approach lets you narrow the scope down to a particular active frame on the stack.
For the closest thing to a heap-level solution, you could use the tracemalloc module to trace and record allocations as they happen, or the pdb debugger to do live introspection from the outside (possibly adding breakpoint() calls to your code so it stops automatically at that point and lets you look around).
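A short tracemalloc sketch along those lines, recording allocations and printing the sites where the most memory was allocated (the allocation loop is just something to observe):

import tracemalloc

tracemalloc.start()

data = [bytes(1000) for _ in range(100)]        # some allocations to observe

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:  # top allocation sites by line
    print(stat)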

When to Py_INCREF?

I'm working on a C extension and am at the point where I want to track down memory leaks. From reading Python's documentation it's hard to understand when to increment or decrement the reference count of Python objects. Also, after a couple of days spent trying to embed the Python interpreter (in order to compile the extension as a standalone program), I had to give up on that endeavor. So tools like Valgrind are of no help here.
So far, by trial and error I learned that, for example, Py_DECREF(Py_None) is a bad thing... but is this true of any constant? I don't know.
My major confusions so far can be listed like this:
Do I have to decrement refcount on anything created by PyWhatever_New() if it doesn't outlive the procedure that created it?
Does every Py_INCREF need to be matched by Py_DECREF, or should there be one more of one / the other?
If a call to a Python procedure results in a PyObject*, do I need to increment its refcount to ensure that I can keep using it (forever), or decrement it to ensure that it will eventually be garbage-collected, or neither?
Are Python objects created through the C API allocated on the stack or on the heap? (It is possible that Py_INCREF reallocates them on the heap, for example.)
Do I need to do anything special to Python objects created in C code before passing them to Python code? What if Python code outlives C code that created Python objects?
Finally, I understand that Python has both reference counting and a garbage collector: if that's the case, how critical is it if I mess up the reference count (i.e. don't decrement enough)? Will the GC eventually figure out what to do with those objects?
Most of this is covered in Reference Count Details, and the rest is covered in the docs on the specific questions you're asking about. But, to get it all in one place:
Py_DECREF(Py_None) is a bad thing... but is this true of any constant?
The more general rule is that calling Py_DECREF on anything you didn't get a new/stolen reference to, and didn't call Py_INCREF on, is a bad thing. Since you never call Py_INCREF on anything accessible as a constant, this means you never call Py_DECREF on them.
Do I have to decrement refcount on anything created by PyWhatever_New()
Yes. Anything that returns a "new reference" has to be decremented. By convention, anything that ends in _New should return a new reference, but it should be documented anyway (e.g., see PyList_New).
Does every Py_INCREF need to be matched by Py_DECREF, or should there be one more of one / the other?
The number in your own code may not necessarily balance. The total number has to balance, but there are increments and decrements happening inside Python itself. For example, anything that returns a "new reference" has already done an inc, while anything that "steals" a reference will do the dec on it.
Are Python objects created through the C API allocated on the stack or on the heap? (It is possible that Py_INCREF reallocates them on the heap, for example.)
There's no way to create objects through C API on the stack. The C API only has functions that return pointers to objects.
Most of these objects are allocated on the heap. Some are actually in static memory.
But your code should not care anyway. You never allocate or delete them; they get allocated in the PySpam_New and similar functions, and deallocate themselves when you Py_DECREF them to 0, so it doesn't matter to you where they are.
(The exception is constants that you can access via their global names, like Py_None. Those, you obviously know, are in static storage.)
Do I need to do anything special to Python objects created in C code before passing them to Python code?
No.
What if Python code outlives C code that created Python objects?
I'm not sure what you mean by "outlives" here. Your extension module is not going to get unloaded while any objects depend on its code. (In fact, until at least 3.8, your module is probably never going to get unloaded until shutdown.)
If you just mean the function that _New'd up an object returning, that's not an issue. You have to go very far out of your way to allocate any Python objects on the stack. And there's no way to pass things like a C array of objects, or a C string, into Python code without converting them to a Python tuple of objects, or a Python bytes or str. There are a few cases where, e.g., you could stash a pointer to something on the stack in a PyCapsule and pass that—but that's the same as in any C program, and… just don't do it.
Finally, I understand that Python has both reference counting and garbage collector
The garbage collector is just a cycle breaker. If you have objects that are keeping each other alive with a reference cycle, you can rely on the GC. But if you've leaked references to an object, the GC will never clean it up.
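A pure-Python analogy of that distinction (the names here are illustrative): a reference cycle is collectable, but an object held by a forgotten extra reference, which is what a missing Py_DECREF amounts to, is never reclaimed.

import gc
import weakref

class Node:
    pass

a, b = Node(), Node()
a.other, b.other = b, a        # a reference cycle
probe = weakref.ref(a)
del a, b
gc.collect()
print(probe())                 # None: the GC broke the cycle

leaked = Node()
hidden = [leaked]              # stands in for a leaked reference in C code
probe2 = weakref.ref(leaked)
del leaked
gc.collect()
print(probe2())                # still alive: the GC will never reclaim it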

Swig - store unknown object until next call of C++ function

I am using swig with Python to call a function myfunc() written in C++. Every time I call myfunc() I have to generate the same huge sparse matrix. What I would like to do instead is to create the matrix once, then pass a pointer to the matrix to Python, without reallocating space every time. What I fear is that this could cause some kind of memory leak.
What is the best way to do this?
The matrix is an Eigen::SparseMatrix.
Is it maybe safe to simply pass a pointer back and forth? Python would not know how to handle it, but as long as the space stays allocated, will I be able to reuse the pointer in C++?
This is precisely how swig handles an unknown object: It passes a pointer to the object around, together with some type information (a string). If a function takes a pointer of that type as argument, swig will happily pass it that pointer. See the swig docs here.
You just have to make sure the types match up, i.e., you cannot pass say a MatrixXd* to python and use it in a function taking a MatrixBase<MatrixXd>*, since swig will not know that the types are compatible.
Also, for unknown objects (at least pointers to such), swig will not do any memory management, so you will need to allocate and deallocate the object on the C++ side.
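For illustration, the Python side then looks roughly like this; the module name example and the wrapped functions create_matrix(), myfunc() and destroy_matrix() are hypothetical stand-ins for your own interface.

import example                    # hypothetical SWIG-generated module

m = example.create_matrix()       # opaque pointer object; Python just stores it
for _ in range(10):
    example.myfunc(m)             # reuse the same matrix on every call
example.destroy_matrix(m)         # allocation and deallocation stay on the C++ side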

Examine Object at Given Memory Address

Given a typical error message thrown by the python interpreter:
TypeError: <sqlalchemy.orm.dynamic.AppenderBaseQuery object at 0x3506490> is not JSON serializable
Can I use that memory address to find the offending object using the python shell?
No, you can't. The only purpose of that address is to identify the object for debugging purposes.
If you really, really want to, it's not impossible. Just hard, and a very bad idea.
In CPython, you can use ctypes to convert a number into a pointer to any type you want. And to load and call functions out of sys.executable (and/or the so/dll/framework where the actual code is) just like any other library. And to define structures that match the C API structures.
If you're really careful, you'll get a quick segfault instead of corrupting everything all to hell. If you're really, really careful, you can occasionally pull off some unsavory hacks without even segfaulting.
However, in this case, it's unlikely to do you any good. Sure, at some point there was a sqlalchemy.orm.dynamic.AppenderBaseQuery object at 0x3506490… but as soon as that object went out of scope, it probably got released, so there could be anything at that location by now…
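For completeness, the hack looks roughly like this in CPython; it only works while the object is still alive at that address, which is why this sketch uses id() of a live object rather than an address copied from an old traceback.

import ctypes

obj = {"still": "alive"}
addr = id(obj)                                    # in CPython, id() is the object's address

recovered = ctypes.cast(addr, ctypes.py_object).value
print(recovered is obj)                           # True, as long as obj is still referenced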

How can I create a buffer which Python would not free?

I need to call a function in a C library from Python, and that function will free() its parameter.
So I tried create_string_buffer(), but it seems that this buffer would later be freed by Python as well, which would make the buffer be freed twice.
I read on the web that Python refcounts these buffers and frees them when there are no references left. So how can I create a buffer that Python would not care about afterwards? Thanks.
example:
I load the library with lib = cdll.LoadLibrary("libxxx.so"), create the buffer with path = create_string_buffer(topdir), and then call lib.load(path). However, the load function in libxxx.so frees its argument, and later "path" is freed again by Python, so it is freed twice.
Try the following in the given order:
Try by all means to manage your memory in Python, for example using create_string_buffer(). If you can control the behaviour of the C function, modify it to not free() the buffer.
If the library function you call frees the buffer after using it, there must be some library function that allocates the buffer (or the library is broken).
Of course you could call malloc() via ctypes, but this would break all good practices of memory management. Use it only as a last resort; it will almost certainly introduce hard-to-find bugs at some later time.
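As that last resort, a malloc()-backed buffer via ctypes could look roughly like this; lib and its load() function are the ones from the question, and freeing the buffer then becomes entirely the C library's job.

import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]

lib = ctypes.cdll.LoadLibrary("libxxx.so")   # the library from the question

topdir = b"/some/topdir"             # placeholder value, as in the question
data = topdir + b"\x00"              # NUL-terminated for the C side
buf = libc.malloc(len(data))         # memory Python knows nothing about
ctypes.memmove(buf, data, len(data))

lib.load(ctypes.c_void_p(buf))       # libxxx.so free()s the buffer; Python never will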
