The use case is the following:
Given a (fixed, not changeable) DLL implemented in C
Wanted: a wrapper for this DLL implemented in Python (chosen method: ctypes)
Some of the functions in the DLL need synchronization primitives. Aiming for maximum flexibility, the designers of the DLL rely entirely on client-provided callbacks. More precisely, this DLL shall have:
a callback function to create a synchronization object
callback functions to acquire/release a lock on the synchronization object
and one callback function to destroy the synchronization object
Because the synchronization object is opaque from the viewpoint of the DLL, it is represented by a void * entity. For example, if one of the DLL functions wants to acquire a lock, it shall do:
void* mutex;
/* get the mutex object via the create_mutex callback */
create_mutex(&mutex);
/* acquire a lock */
lock_mutex(mutex);
... etc
As can be seen, the create_mutex callback's parameter has output semantics; this is achieved with a void ** signature.
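For illustration, the four callbacks might be declared along these lines in C (a sketch; the names are made up, not taken from the real DLL header):

typedef int (*create_mutex_cb)(void **mutex);   /* output parameter, hence void ** */
typedef int (*lock_mutex_cb)(void *mutex);
typedef int (*unlock_mutex_cb)(void *mutex);
typedef int (*destroy_mutex_cb)(void *mutex);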
This callback (and the other three) must be implemented in Python, and that is where I've failed :-) For simplicity, let's focus only on the creation callback, and also for simplicity, let the opaque object be an int.
The toy-DLL, which emulates the use of callbacks, is the following (ct_test.c):
#include <stdio.h>
#include <stdlib.h>

typedef int (* callback_t)(int**);

callback_t func;
int* global_i_p = NULL;

int mock_callback(int** ipp)
{
    int* dynamic_int_p = (int *) malloc(sizeof(int));
    /* dynamic int value from C */
    *dynamic_int_p = 2;
    *ipp = dynamic_int_p;
    return 0;
}

void set_py_callback(callback_t f)
{
    func = f;
}

void set_c_callback()
{
    func = mock_callback;
}

void test_callback(void)
{
    printf("global_i_p before: %p\n", global_i_p);
    func(&global_i_p);
    printf("global_i_p after: %p, pointed value:%d\n", global_i_p, *global_i_p);
    /* to be nice */
    if (func == mock_callback)
        free(global_i_p);
}
The Python code, which provides the callback and uses the DLL, is the following (ct_test.py):
from ctypes import *
lib = CDLL("ct_test.so")
# "dynamic" int value from python
int = c_int(1)
int_p = pointer(int)
def pyfunc(p_p_i):
    p_p_i.contents = int_p
# create callback type and instance
CALLBACK = CFUNCTYPE(c_int, POINTER (POINTER(c_int)))
c_pyfunc = CALLBACK(pyfunc)
# functions from .so
set_py_callback = lib.set_py_callback
set_c_callback = lib.set_c_callback
test_callback = lib.test_callback
# set one of the callbacks
set_py_callback(c_pyfunc)
#set_c_callback()
# test it
test_callback()
When using the callback provided inside the DLL (set via set_c_callback()), this works as expected:
~/dev/test$ python ct_test.py
global_i_p before: (nil)
global_i_p after: 0x97eb008, pointed value:2
However, the other case - with the Python callback - fails:
~/dev/test$ python ct_test.py
global_i_p before: (nil)
Traceback (most recent call last):
File "/home/packages/python/2.5/python2.5-2.5.2/Modules/_ctypes/callbacks.c", line 284, in 'converting callback result'
TypeError: an integer is required
Exception in <function pyfunc at 0xa14079c> ignored
Segmentation fault
Where am I wrong?
You appear to be defining the return type incorrectly. Your C callback returns an int, while you declare the Python one as returning c_int yet don't explicitly return anything (thus it actually returns None). If you "return 0" it might stop crashing. In any case, you should either do that or change the callback signature to CFUNCTYPE(None, ...etc).
Also, although it's not a current problem here, you're shadowing the "int" builtin name. This might lead to problems later.
Edited: to correctly refer to the C return type as "int", not "void".
The segfault is due to incorrect pointer handling in your Python callback. You have more levels of pointer indirection than strictly necessary, which is probably the source of your confusion. In the Python callback you set p_p_i.contents, but that only changes what the Python ctypes object points at, not the underlying pointer. To change the underlying pointer, dereference it via array-access syntax. A distilled example:
ip = ctypes.POINTER(ctypes.c_int)()
i = ctypes.c_int(99)
# Wrong way
ipp = ctypes.pointer(ip)
ipp.contents = ctypes.pointer(i)
print bool(ip) # False --> still NULL
# Right way
ipp = ctypes.pointer(ip)
ipp[0] = ctypes.pointer(i)
print ip[0] # 99 --> success!
The type error is due to a type incompatibility as described in Peter Hansen's answer.
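Putting both fixes together, a sketch of how the callback from the question could look (using the same names as the original code, including the shadowed int_p):

def pyfunc(p_p_i):
    p_p_i[0] = int_p   # write through the pointer (array-access dereference)
    return 0           # match the c_int return type declared in CALLBACK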
Related
I want to create a function in Python, pass its function pointer to C, and execute it there.
So my python file:
import ctypes
import example
def tester_print():
    print("Hello")
my_function_ptr = ctypes.CFUNCTYPE(None)(tester_print)
example.pass_func(my_function_ptr)
And here is what my function in C looks like:
typedef void (*MyFunctionType)(void);
PyObject* pass_func(PyObject *self, PyObject* args)
{
    PyObject* callable_object;
    if (!PyArg_ParseTuple(args, "O", &callable_object))
        return NULL;

    if (!PyCallable_Check(callable_object))
    {
        PyErr_SetString(PyExc_TypeError, "The object is not a callable function.");
        return NULL;
    }

    PyObject* function_pointer = PyCapsule_New(callable_object, "my_function_capsule", NULL);
    if (function_pointer == NULL) return NULL;

    MyFunctionType my_function = (MyFunctionType) PyCapsule_GetPointer(function_pointer, "my_function_capsule");
    if (my_function == NULL) return NULL;

    my_function(); // Or (*my_function)() Both same result.

    // PyCapsule_Free(function_pointer);
    Py_RETURN_NONE;
}
Doing this causes a seg fault on my_function() call. How can I do this?
If you're just trying to pass a Python function to a C extension, pass it directly (don't use ctypes) and use PyObject_Call to call it:
example.pass_func(tester_print)
and
PyObject_CallNoArgs(callable_object);
If you need a real C function pointer for whatever reason, the usual approach is to write a C wrapper that takes the callable as an argument:
void callable_wrapper(PyObject *func) {
    PyObject_CallNoArgs(func);
    // plus whatever other code you need (e.g. reference counting, return value handling)
}
Most reasonable C APIs that take a callback function also provide a way to add an arbitrary argument to the callable ("user data"); for example, with pthreads:
result = pthread_create(&tid, &attr, callable_wrapper, callable_object);
Make sure to handle reference counting correctly: increment the reference on your callable object before passing it to the C API, and decrement the reference when it is no longer needed (e.g. if the callback is only called once, the callable_wrapper could DECREF before returning).
When using threads, you additionally need to ensure that you hold the GIL when calling any Python code; see https://docs.python.org/3/c-api/init.html#non-python-created-threads for more details and a code sample.
What your current code is doing is receiving a pointer to a ctypes CFUNCTYPE object as callable_object, placing that pointer in a capsule, taking it back out again, and calling it as if it was a C function pointer. This doesn't work, since it effectively attempts to call the CFUNCTYPE object as if it were a C function (the capsule stuff winds up being useless). When you're using the Python C API, there's almost never any need for ctypes in Python, because the C API can directly interact with Python objects.
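For reference, a minimal sketch of pass_func along those lines (same names as in the question; PyObject_CallNoArgs requires Python 3.9+):

PyObject* pass_func(PyObject *self, PyObject *args)
{
    PyObject *callable_object;
    if (!PyArg_ParseTuple(args, "O", &callable_object))
        return NULL;

    if (!PyCallable_Check(callable_object)) {
        PyErr_SetString(PyExc_TypeError, "The object is not a callable function.");
        return NULL;
    }

    /* call the Python callable directly; no capsule, no C function pointer */
    PyObject *result = PyObject_CallNoArgs(callable_object);
    if (result == NULL)
        return NULL;       /* propagate any exception raised by the callback */
    Py_DECREF(result);     /* the return value is not used here */

    Py_RETURN_NONE;
}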
So I have this C extension function which loads a Python module and uses a list of C++ strings to get a specific global attribute from that module:
PyObject* get_global_constant(const char* module_name, std::vector<std::string> constant_names) {
    /* Gets a global variable from a Python module */
    PyObject *temp_module = PyImport_ImportModule(module_name);
    PyObject *global_var = PyImport_AddModule(module_name);
    for (std::string constant : constant_names) {
        global_var = PyObject_GetAttrString(global_var, constant.c_str());
    }
    Py_DECREF(temp_module);
    return global_var;
}
Does this leak?
Every call to PyObject_GetAttrString leaks a reference (excluding the final call, for which you return the reference). It returns a new reference so you need to call Py_DECREF on that reference.
You should also be error-checking your return values (most Python C API functions return NULL if they raise an exception). If you're getting a segmentation fault it's most likely because you're not error checking.
PyImport_AddModule seems pointless given that you just got a reference to the module when you imported it.
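A sketch of the same function with the intermediate references released and the return values checked (same signature as in the question; the redundant PyImport_AddModule is dropped):

PyObject* get_global_constant(const char* module_name, std::vector<std::string> constant_names) {
    /* Gets a global variable from a Python module */
    PyObject *obj = PyImport_ImportModule(module_name);
    if (obj == NULL)
        return NULL;                 /* import failed, exception already set */
    for (const std::string& constant : constant_names) {
        PyObject *next = PyObject_GetAttrString(obj, constant.c_str());
        Py_DECREF(obj);              /* release the reference we are done with */
        if (next == NULL)
            return NULL;             /* attribute lookup failed */
        obj = next;
    }
    return obj;                      /* caller owns this reference */
}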
I would like to malloc something inside a function in C. I would then like to call this function from Python 3.10 via ctypes.CDLL, and later free it.
However, I get a segmentation fault. Here's my very simple C code:
#include <stdlib.h>

struct QueueItem {
    void *value;
    struct QueueItem *next;
};

struct Queue {
    struct QueueItem* head;
    struct QueueItem* tail;
};

struct Queue * new_queue(void * value) {
    struct Queue* queue = malloc(sizeof(struct Queue));
    struct Queue queue_ = { value, NULL };
    return queue;
}

void delete_queue(struct Queue* queue) {
    free(queue);
};
I'll compile this with gcc -fPIC -shared src/queue.c -o queue.so, and then on the Python side:
import ctypes
queue = ctypes.CDLL("./queue.so")
q = ctypes.POINTER(queue.new_queue(1))
print(q)
print(type(q))
queue.delete_queue(q)
But running this will yield:
-1529189344
<class 'int'>
Segmentation fault
The question is: how do I malloc in C, pass the pointer through Python, and then free it again in C?
Primary Resources Consulted:
Passing pointer to DLL via Ctypes in Python
Python Ctypes passing pointer to structure containing void pointer array
https://docs.python.org/3/library/ctypes.html
I've been messing around with ctypes.POINTER, but it complains about types, and I'm not an expert in this area.
If you don't define the restype and argtypes for a function, the restype is assumed to be a C int (c_int), and the argument types are guessed at based on what you pass. The problem here is that the implicit restype of C int is (on a 64 bit system) half the width of a pointer, so the value returned by new_queue is only half of a pointer (which is completely useless).
For safety, and error-checking, you should define both before calling a function, especially the restype which can't be inferred.
So for your code, you might do:
import ctypes
queue = ctypes.CDLL("./queue.so")
queue.new_queue.argtypes = (ctypes.c_void_p,)
queue.new_queue.restype = ctypes.c_void_p
queue.delete_queue.argtypes = (ctypes.c_void_p,)
queue.delete_queue.restype = None
q = queue.new_queue(1)
print(q)
print(type(q))
queue.delete_queue(q)
Note that passing along 1 as the argument to new_queue is almost certainly incorrect, but I suspect it will work here since none of the code will actually try to use it.
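If you would rather have ctypes carry a more descriptive type than c_void_p, here is a variation of the above (a sketch; the Structure subclass can stay empty because Python never touches the struct's fields):

import ctypes

class Queue(ctypes.Structure):
    pass  # opaque on the Python side

queue = ctypes.CDLL("./queue.so")
queue.new_queue.argtypes = (ctypes.c_void_p,)
queue.new_queue.restype = ctypes.POINTER(Queue)
queue.delete_queue.argtypes = (ctypes.POINTER(Queue),)
queue.delete_queue.restype = None

q = queue.new_queue(None)   # NULL for the void* value in this sketch
queue.delete_queue(q)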
In C++ code that I can't modify, we have a custom class and custom pointers.
This class reads an object and, depending on kwargs, might also write it.
I need to make pybind11 bindings for it.
Depending on the arguments passed, we get a const or non-const pointer to the class.
Pybind complains about inconsistent types deduced for the lambda return type.
It all goes well when making an init() function that works only for const or only for non-const objects. Unfortunately, I need to support both cases.
What would be the best way to make a Python binding to the C++ code in this case?
Error
In lambda function:
error: inconsistent types 'std::shared_ptr<myClass>' and 'std::shared_ptr<const myClass>'
deduced for lambda return type
return *const_p;
How could the code below be modified to support both cases?
Using a C++11 compiler.
// those defined elsewhere
typedef std::shared_ptr< myClass > my_ptr;
typedef std::shared_ptr< const myClass > const_my_ptr;

// init function for pybind11
.def(py::init([](py::kwargs kwargs) {
    bool object_writable = py::bool_(kwargs["rw"]);
    int cache = py::bool_(kwargs["cache"]);
    std::string path = py::str(kwargs["path"]);
    if (object_writable) {
        // returns non-const
        my_ptr p = myClass::read_write(path);
        return *p;
    }
    else {
        // returns const
        const_my_ptr const_p = myClass::read(path, cache);
        return *const_p;
    }
}))
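One possible direction (a sketch, not tested against the real class): give the lambda a single explicit return type and unify the const branch with std::const_pointer_cast. This assumes myClass is bound with a std::shared_ptr holder and that dropping const here is acceptable for your use case:

.def(py::init([](py::kwargs kwargs) -> my_ptr {
    bool object_writable = py::bool_(kwargs["rw"]);
    int cache = py::bool_(kwargs["cache"]);
    std::string path = py::str(kwargs["path"]);
    if (object_writable) {
        return myClass::read_write(path);  // already non-const
    }
    // const result: cast the constness away so both branches return my_ptr
    return std::const_pointer_cast<myClass>(myClass::read(path, cache));
}))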
I'm new to the Python C-API and browsing through some source code to pick parts of it up.
Here is a minimal version of a function that I found, in the C source of a package that contains extension modules:
#define PY_SSIZE_T_CLEAN
#include <Python.h>
static PyObject *
modulename_myfunc(PyObject *self, PyObject *args) {
    // Call PyArg_ParseTuple, etc ...

    // Dummy values; in the real function they are calculated
    int is_found = 1;
    Py_ssize_t n_bytes_found = 1024;

    PyObject *result;
    result = Py_BuildValue("(Oi)",
                           is_found ? Py_True : Py_False, // Py_INCREF?
                           n_bytes_found);
    return result;
}
Does this introduce a small memory leak by failing to use Py_INCREF on either Py_True or Py_False? The C-API docs for Boolean object seem pretty explicit about always needing to incref/decref Py_True and Py_False.
If a Py_INCREF does need to be introduced, how can it most properly be used here, assuming that Py_RETURN_TRUE/Py_RETURN_FALSE aren't really applicable because a tuple is being returned?
The reason a Py_INCREF is not needed here is that Py_BuildValue, when passed an object with "O", will increment the reference count for you:
O (object) [PyObject *]
Pass a Python object untouched (except for its reference count, which is incremented by one). If the object passed in is a NULL pointer, it is assumed that this was caused because the call producing the argument found an error and set an exception. Therefore, Py_BuildValue() will return NULL but won’t raise an exception. If no exception has been raised yet, SystemError is set.
You'll see a similar usage here in CPython itself for example.
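For contrast, a sketch of the same tuple construction where you take ownership of the bool yourself; the "N" format does not increment, i.e. it steals the reference you just created, and "n" is the format for Py_ssize_t (variable names follow the question's snippet):

PyObject *flag = is_found ? Py_True : Py_False;
Py_INCREF(flag);                 /* take a reference we now own */
result = Py_BuildValue("(Nn)",   /* "N" steals that reference instead of incrementing */
                       flag, n_bytes_found);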