How can I create a buffer which Python would not free?

I need to call a function in a C library from Python, and that function free()s its parameter.
So I tried create_string_buffer(), but it seems that this buffer would later be freed by Python as well, which would make the buffer be freed twice.
I read on the web that Python refcounts its buffers and frees them once no reference remains. So how can I create a buffer that Python will not touch afterwards? Thanks.
Example: I load the library and call the function like this:

lib = cdll.LoadLibrary("libxxx.so")
path = create_string_buffer(topdir)
lib.load(path)

However, the load function in libxxx.so frees its argument, and later path is freed again by Python, so it is freed twice.

Try the following in the given order:
Try by all means to manage your memory in Python, for example using create_string_buffer(). If you can control the behaviour of the C function, modify it to not free() the buffer.
If the library function you call frees the buffer after using it, there must be some library function that allocates the buffer (or the library is broken).
Of course you could call malloc() via ctypes (a sketch follows below), but this breaks all good practice in memory management. Use it as a last resort: it will almost certainly introduce hard-to-find bugs at some later time.
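A minimal sketch of that last resort, assuming a Linux system and that libxxx.so frees its argument with the C runtime's free(). The buffer is allocated with the same runtime's malloc(), so Python's memory management never sees it (the topdir value is illustrative):

from ctypes import CDLL, cdll, c_size_t, c_void_p, memmove
from ctypes.util import find_library

libc = CDLL(find_library("c"))          # the C runtime that owns malloc()/free()
libc.malloc.restype = c_void_p
libc.malloc.argtypes = [c_size_t]

lib = cdll.LoadLibrary("libxxx.so")

topdir = b"/some/topdir"                # illustrative value
buf = libc.malloc(len(topdir) + 1)      # Python's refcounting knows nothing of this block
memmove(buf, topdir, len(topdir))       # copy the bytes in; malloc'd memory is uninitialized
memmove(buf + len(topdir), b"\x00", 1)  # NUL-terminate the C string

lib.load(c_void_p(buf))                 # the library free()s it; never touch buf again

This only works if the library frees the buffer with the same C runtime whose malloc() allocated it, which is one of the reasons it is fragile.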

Related

python bytes object underlying buffer reallocation, cffi

I'm using Python bytes objects to pass some data to natively implemented methods using the CFFI library, for example:
from cffi import FFI
ffi = FFI()
lib = ffi.dlopen(libname)
ffi.cdef("""
void foo(char*);
""")
x = b'abc123'
lib.foo(x)
As far as I understand, the pointer received by the native method is that of the actual underlying buffer behind the x bytes object. This works fine 99% of the time, but sometimes the pointer seems to get invalidated and the data it points to turns to garbage some time after the native call has finished: the native code keeps the pointer around after returning from the initial call and expects the data to still be there, and the Python code keeps a reference to x so that the pointer, hopefully, remains valid.
In these cases, if I call a native method with the same bytes object again, I get a different pointer: it points to the same value but lives at a different address. This indicates that the underlying buffer behind the bytes object has moved (assuming CFFI really does extract a pointer to the underlying array of the bytes object and does not create a temporary copy anywhere), even though, to the best of my knowledge, the bytes object has not been modified in any way (the code is part of a large codebase, but I'm reasonably sure the bytes objects are not modified directly).
What could be happening here? Is my assumption that CFFI obtains a pointer to the actual internal buffer of the bytes object incorrect? Is Python perhaps allowed to silently reallocate the buffers behind bytes objects for garbage collection or memory compaction reasons, unaware that I'm holding a pointer into them? I'm using PyPy instead of the default Python interpreter, if that makes a difference.
Your guess is the correct answer. The (documented) guarantee is only that the pointer passed in this case is valid for the duration of the call.
PyPy's garbage collector can move objects in memory, if they are small enough that doing so is a win in overall performance. When doing such a cffi call, though, pypy will generally mark the object as "pinned" for the duration of the call (unless there are already too many pinned objects and adding more would seriously hurt future GC performance; in this rare case it will make a copy anyway and free it afterwards).
If your C code needs to access the memory after the call has returned, you have to make a copy explicitly, e.g. with ffi.new("char[]", mybytes), and keep it alive as long as needed.
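A minimal sketch of that fix, reusing the ffi, lib, and foo names from the question (the libname value is illustrative):

from cffi import FFI

ffi = FFI()
ffi.cdef("""
void foo(char*);
""")
libname = "libfoo.so"            # illustrative, as in the question
lib = ffi.dlopen(libname)

x = b'abc123'
buf = ffi.new("char[]", x)       # cffi-owned, NUL-terminated copy at a fixed address
lib.foo(buf)
keepalive = buf                  # the copy lives exactly as long as this reference
# ... later, once the C side has dropped the pointer:
keepalive = None                 # now the copy may be freed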

create shared memory around existing array (python)

Everywhere I see shared memory implementations for Python (e.g. in multiprocessing), creating shared memory always allocates new memory. Is there a way to create a shared memory object that refers to existing memory? The purpose would be to pre-initialize the data values, or rather, to avoid copying into the new shared memory when we already have, say, an array in hand. In my experience, allocating a large shared array is much faster than copying values into it.
The short answer is no.
I'm the author of the Python extensions posix_ipc and sysv_ipc. Like Python's multiprocessing module from the standard library, my modules are just wrappers around facilities provided by the operating system, so what you really need to know is what the OS allows when allocating shared memory. That differs a little between SysV IPC and POSIX IPC, but in this context the difference isn't really important. (I think multiprocessing uses POSIX IPC where possible.)
For SysV IPC, the OS-level call to allocate shared memory is shmget(). You can see on that call's man page that it doesn't accept a pointer to existing memory; it always allocates new memory for you. Ditto for the POSIX IPC version of the same call (shm_open()). POSIX IPC is interesting because it implements shared memory to look like a memory mapped file, so it behaves a bit differently from SysV IPC.
Regardless, whether one is calling from Python or C, there's no option to ask the operating system to turn an existing piece of private memory into shared memory.
If you think about it, you'll see why. Suppose you could pass a pointer to a chunk of private memory to shmget() or shm_open(). Now the operating system is stuck with the job of keeping that memory where it is until all sharing processes are done with it. What if it's in the middle of your stack? Suddenly that big chunk of your stack can't be released because other processes are using it. It also means that when your process dies, the OS can't reclaim all of its memory, because some of it is now being used by other processes.
In short, what you're asking for from Python isn't offered because the underlying OS calls don't allow it, and the underlying OS calls don't allow it (probably) because it would be really messy for the OS.
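Since the OS only hands out fresh shared memory, the usual workaround is to build the array inside the shared block from the start instead of copying into it afterwards. A minimal sketch using the standard library's multiprocessing.shared_memory (Python 3.8+) together with numpy; the shape and dtype are illustrative:

import numpy as np
from multiprocessing import shared_memory

shape, dtype = (1000, 1000), np.float64
nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
shm = shared_memory.SharedMemory(create=True, size=nbytes)

# Construct the array directly on top of the shared block, so values are
# written into shared memory from the start rather than copied in later.
arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
arr.fill(0.0)                      # pre-initialize in place

# Other processes attach by name and see the same memory:
#   other = shared_memory.SharedMemory(name=shm.name)
#   view = np.ndarray(shape, dtype=dtype, buffer=other.buf)

shm.close()
shm.unlink()                       # unlink exactly once, after all processes are done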

Does the python C extension copy python objects directly, or modify pointers?

When parsing a Python object, e.g. via PyArg_ParseTuple("s", &addr), does addr simply get set to point at the string data inside the Python object, or is there a memory allocation call somewhere? If the memory is copied, is there any way to avoid this? For context, I'm trying to pass large buffers from Python into C quickly and would like to avoid unnecessary memory copies.

How do I store a Python object in memory for use by different processes?

Here's the situation: I have a massive object that needs to be loaded into memory. So big that if it is loaded in twice it will go beyond the available memory on my machine (and no, I can't upgrade the memory). I also can't divide it up into any smaller pieces. For simplicity's sake, let's just say the object is 600 MB and I only have 1 GB of RAM.

I need to use this object from a web app, which is running in multiple processes, and I don't control how they're spawned (a third-party load balancer does that), so I can't rely on just creating the object in some master thread/process and then spawning off children. This also eliminates the possibility of using something like POSH, because that relies on its own custom fork call.

I also can't use something like a SQLite memory database, mmap, or the posix_ipc, sysv_ipc, and shm modules, because those act as a file in memory, and this data has to be an object for me to use it. Using one of those, I would have to read it as a file and then turn it into an object in each individual process, and BAM, segmentation fault from going over the machine's memory limit because I just tried to load in a second copy.
There must be some way to store a Python object in memory (and not as a file/string/serialized/pickled object) and have it be accessible from any process. I just don't know what it is. I've looked all over Stack Overflow and Google and can't find the answer, so I'm hoping somebody can help me out.
http://docs.python.org/library/multiprocessing.html#sharing-state-between-processes
Look for shared memory, or server process. After re-reading your post, the server process approach sounds closer to what you want.
http://en.wikipedia.org/wiki/Shared_memory
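A minimal sketch of the server-process approach, using multiprocessing.managers.BaseManager so that independently spawned processes can connect by address; the object, port, and authkey are all illustrative:

from multiprocessing.managers import BaseManager

big_object = {"lots": "of data"}           # stand-in for the 600 MB object

class ObjManager(BaseManager):
    pass

# --- server process: holds the only in-memory copy ---
ObjManager.register("get_obj", callable=lambda: big_object)
server = ObjManager(address=("127.0.0.1", 50000), authkey=b"secret")
# server.get_server().serve_forever()

# --- any worker process, however it was spawned ---
ObjManager.register("get_obj")
client = ObjManager(address=("127.0.0.1", 50000), authkey=b"secret")
# client.connect()
# proxy = client.get_obj()   # a proxy; each access is an IPC round-trip, not a copy

The trade-off is that every attribute access on the proxy goes through the server process, so this avoids the second copy at the cost of per-access latency.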
"There must be some way to store a Python object in memory (and not as a file/string/serialized/pickled) and have it be accessible from any process."
That isn't the way it works. Python object reference counting and an object's internal pointers do not make sense across multiple processes.
If the data doesn't have to be an actual Python object, you can try working on the raw data stored in mmap() or in a database or somesuch.
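If the raw-data route is acceptable, a minimal sketch with the standard library's mmap, assuming both processes agree on a backing file path (/tmp/shared.dat is illustrative):

import mmap

# process A: create the backing file and map it
with open("/tmp/shared.dat", "wb") as f:
    f.write(b"\x00" * 1024)             # reserve 1 KiB
with open("/tmp/shared.dat", "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)        # map the whole file
    m[0:5] = b"hello"                   # visible to every process mapping this file
    m.flush()

# process B: map the same file and read in place, without a second copy
with open("/tmp/shared.dat", "r+b") as f:
    m2 = mmap.mmap(f.fileno(), 0)
    print(m2[0:5])                      # b'hello'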
I would implement this as a C module that gets imported into each Python script. Then the interface to this large object would be implemented in C, or some combination of C and Python.

Python C API and data persistent in memory?

I'm considering integrating some C code into a Python system (Django), and I was considering using the Python / C API. The alternative is two separate processes with IPC, but I'm looking into direct interaction first. I'm new to Python so I'm trying to get a feel for the right direction to take.
Is it possible for a C initialiser function to malloc a block of memory (and put something in it) and return a handle to it to the Python script (a pointer to the start of the memory block)? The allocated memory should remain on the heap after the init function returns. The Python script can then call subsequent C functions (passing the pointer to the start of the memory as an argument), and each function can do some thinking and return a value to the Python script. Finally, another C function deallocates the memory.
Assume that the application is single-threaded and that after the init function, the memory is only read from so concurrency isn't an issue. The amount of memory will be a few hundred megabytes.
Is this even possible? Will Python let me malloc from the heap and let it stay there? Will it come from the Python process's memory? Will Python try to clean it up (i.e. does it do its own memory allocation and not expect anything else to interfere with its address space)?
Could I just return the byte array as a Python managed string (or similar datatype) and pass the reference back as an argument to the C call? Would Python be OK with such a large string?
Would I be better off doing this with a separate process and IPC?
Cython
You can certainly use the C API to do what you want. You'll create a class in C, which can hold onto any memory it wants. That memory doesn't have to be exposed to Python at all if you don't want it to be.
If you are comfortable building C DLLs, and don't need to perform Python operations in C, then ctypes might be your best bet.
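If ctypes is the route taken, a minimal sketch of the handle pattern described above, assuming a hypothetical libengine.so exporting engine_init / engine_query / engine_free:

from ctypes import cdll, c_int, c_void_p

lib = cdll.LoadLibrary("libengine.so")      # hypothetical library
lib.engine_init.restype = c_void_p          # opaque handle; Python never dereferences it
lib.engine_query.argtypes = [c_void_p, c_int]
lib.engine_query.restype = c_int
lib.engine_free.argtypes = [c_void_p]

handle = lib.engine_init()                  # C mallocs the big block and returns a pointer
try:
    result = lib.engine_query(handle, 42)   # C does the thinking on its own memory
finally:
    lib.engine_free(handle)                 # matching deallocator; Python's GC never touches the block

Because Python only ever sees the handle as an opaque integer, it makes no attempt to manage or free the underlying block; the C side fully owns that memory.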
