I have a bunch of cdef functions in Cython that are called by a def function in a pyx file, e.g.:
cdef inline void myfunc_c(...):
    (...)
    return

def wrapper(...):
    myfunc_c(...)
    return
This works well. But to avoid having to write a Python wrapper for each cdef function, I was trying to index the cdef functions by name, either by assigning them to a dictionary or with something like:
def wrapper(operation):
    if operation == 'my_func':
        func = myfunc_c
    func(...)
    return
But this doesn't work. Cython complains that it doesn't know the type of myfunc_c.
Is there any way to index or call the cpdef functions by name (e.g. use a string)? I also tried things like locals()['myfunc_c'], but that doesn't work either.
For general cdef functions this is impossible - they only define a C interface, not a Python interface, so there's no introspection available.
For a cdef function declared with api (e.g. cdef api funcname()) it is actually possible: the module exposes an undocumented dictionary, __pyx_capi__, mapping names to PyCapsules that contain function pointers. You'd then do
capsule_name = PyCapsule_GetName(obj)
func = PyCapsule_GetPointer(obj, capsule_name)
(where PyCapsule_* are functions cimported from the Python C API). func is a void* that you can cast into a function pointer of an appropriate type. Getting the type right is important, and up to you!
Although undocumented, the __pyx_capi__ interface is relatively stable and used by Scipy for its LowLevelCallable feature, for example.
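A minimal sketch of that lookup (all names here are illustrative, assuming a compiled module mymodule that contains cdef api void myfunc_c(double)):

    from cpython.pycapsule cimport PyCapsule_GetName, PyCapsule_GetPointer
    import mymodule

    # the ctypedef must match the real C signature of the target function exactly
    ctypedef void (*myfunc_t)(double)

    def call_by_name(name, double arg):
        cdef myfunc_t func
        obj = mymodule.__pyx_capi__[name]
        func = <myfunc_t>PyCapsule_GetPointer(obj, PyCapsule_GetName(obj))
        func(arg)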
cpdef functions define both a Python and a C interface. Within a function they will be available in globals() rather than locals() (since locals() only gives the variables defined in that function).
I don't actually think you want to do this though. I think you just want to use cpdef instead of cdef since this automatically generates a Python wrapper for the function.
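A sketch of that suggestion (the function names are made up): with cpdef, each function gets a Python wrapper in the module namespace, so you can dispatch on a string without writing per-function wrappers.

    cpdef double myfunc_c(double x):
        return 2 * x

    cpdef double otherfunc_c(double x):
        return x * x

    def wrapper(operation, double arg):
        # the cpdef Python wrappers live in the module's globals(), so a string lookup works
        func = globals()[operation]
        return func(arg)

Note that dispatching through globals() goes through the Python wrapper, so it gives up the fast C-level call; that is usually fine for a convenience entry point.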
I'd like to know the difference between def, cdef and cpdef when I declare a function.
The difference between def and the others is more or less clear.
I've also seen that sometimes a return type is added to the declaration (cdef void/double/int... name) and sometimes not.
I'd also like to know how to declare a string variable in Cython; since I didn't know how, I declared it as object.
The key difference is in where the function can be called from: def functions can be called from Python and Cython while cdef function can be called from Cython and C.
Both types of functions can be declared with any mixture of typed and untyped arguments, and in both cases the internals are compiled to C by Cython (and the compiled code should be very, very similar):
# A Cython class for illustrative purposes
cdef class C:
    pass

def f(int arg1, C arg2, arg3):
    # takes an integer, a "C" and an untyped generic python object
    pass

cdef g(int arg1, C arg2, arg3):
    pass
In the example above, f will be visible to Python (once it has imported the Cython module), while g is not visible to Python and cannot be called from it. g will translate into a C signature of:
PyObject* some_name(int, struct __pyx_obj_11name_of_module_C *, PyObject*)
(where struct __pyx_obj_11name_of_module_C * is just the C struct that our class C is translated into). This allows it to be passed to C functions as a function pointer for example. In contrast f cannot (easily) be called from C.
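As a small sketch of the "function pointer" point (the names here are illustrative, not from the answer): a cdef function can be passed wherever a C function pointer of the matching type is expected.

    ctypedef double (*unary_op)(double)

    cdef double square(double x):
        return x * x

    cdef double apply_op(unary_op op, double x):
        return op(x)  # a plain C call through the function pointer

    def demo(double x):
        return apply_op(square, x)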
Restrictions on cdef functions:
cdef functions cannot be defined inside other functions - this is because there is no way of storing any captured variables in a C function pointer. E.g. the following code is illegal:
# WON'T WORK!
def g(a):
    cdef inner(int b):
        return a + b
cdef functions cannot take *args and **kwargs type arguments. This is because they cannot easily be translated into a C signature.
Advantages of cdef functions
cdef functions can take any type of argument, including those that have no Python equivalent (for example pointers). def functions cannot have these, since they must be callable from Python.
cdef functions can also specify a return type (if it is not specified then they return a Python object, PyObject* in C). def functions always return a Python object, so cannot specify a return type:
cdef int h(int* a):
    # specify a return type and take a non-Python compatible argument
    return a[0]
cdef functions are quicker to call than def functions because they translate to a simple C function call.
cpdef functions
cpdef functions cause Cython to generate a cdef function (that allows a quick function call from Cython) and a def function (which allows you to call it from Python). Internally the def function just calls the cdef function. In terms of the types of arguments allowed, cpdef functions have all the restrictions of both cdef and def functions.
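For illustration (the name is made up), a minimal cpdef function looks like an ordinary cdef function with a Python-visible twin:

    cpdef int add_one(int x):
        return x + 1

From other Cython code this compiles to a direct C call; from Python you simply import the module and call add_one(3) as usual.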
When to use a cdef function
Once the function has been called there is no difference in the speed that the code inside a cdef and a def function runs at. Therefore, only use a cdef function if:
You need to pass non-Python types in or out, or
You need to pass it to C as a function pointer, or
You are calling it often (so the sped-up function call is important) and you don't need to call it from Python.
Use a cpdef function when you are calling it often (so the sped-up function call is important) but you do need to call it from Python.
def declares a function at the Python level. Because Cython compiles down to C, it additionally lets you use cdef and cpdef.
cdef declares a function at the C level. As you may know, in C you have to declare the return type of every function. Sometimes a function returns void, which corresponds to a plain return in Python.
Python is an object-oriented language, so you can also define methods at the C level in extension types and override them in subclasses:
cdef class A:
    cdef foo(self):
        print("A")

cdef class B(A):
    cdef foo(self, x=None):
        print("B", x)

cdef class C(B):
    cpdef foo(self, x=True, int k=3):
        print("C", x, k)
In summary, why do we need def, cdef and cpdef? Because Cython converts your Python code into C code before compiling it, and these keywords let you control how the resulting C code is generated.
For more information I suggest you read the official documentation: http://docs.cython.org/src/reference/language_basics.html
I use Cython to wrap C++ code. The code contains a function defined as:
std::vector<ClassOut> analyze(std::vector<ClassIn> inputVec);
ClassIn and ClassOut are extension types. From Python I'd like to be able to call this function with a list or a numpy array (whatever is possible and most sensible). I also want to be able to access and modify the extension types, so something like this:
run.py
from cythonCode.classIn import PyClassIn
from cythonCode.classOut import PyClassOut
from cythonCode.analyze import PyAnalyze
classIn_list = []
classIn_list.append(PyClassIn())
classIn_list.append(PyClassIn())
classOut_list = PyAnalyze(classIn_list)
print(classOut_list)
The wrappers PyClassIn and PyClassOut work fine. The problem is simply the wrapping of the analyze function from the beginning. My version of the wrapper PyAnalyze can be found below:
analyze.pxd
from libcpp.vector cimport vector
from classOut cimport ClassOut
from classIn cimport ClassIn, PyClassIn
cdef extern from "../cppCode/analyze.h":
vector[ClassOut] analyze(vector[ClassIn])
analyze.pyx
def PyAnalyze(vector<PyClassIn> inputVec)
    return analyze(inputVec)
There are for sure mistakes in the analyze.pyx. I am getting the error:
Python object type 'PyClassIn' cannot be used as a template argument
The return statement has to be incorrect as well. Cython complains with:
Cannot convert 'vector[ClassOut]' to Python object
I have this code as a minimal example at https://github.com/zyzzler/cython-vector-minimal-example.git
EDIT: Thanks to your input I am now at the point where the return type of the definition can be wrapped but the argument not yet. The link in the first comment provided great information about getting the return type correct. So assuming I'd want to wrap a function defined as:
std::vector<ClassOut> analyze(std::vector<float> inputVec);
everything works fine! However, I have to deal with the Extension Type ClassIn in place of float. So below is the code I have now:
analyze.pyx
def PyAnalyze(classesIn):
    cdef vector[ClassOut] classesOut = analyze(classesIn)
    retval = PyClassOutVector()
    retval.move_from(move(classesOut))
    return retval
The above code throws the error:
Cannot convert Python object to 'vector[ClassIn]'
The reason for this error is clear. "classesIn" is a Python list of PyClassIn objects but analyze(...) takes a vector[ClassIn] as input. So the question is how to convert from the Python list to the std::vector and/or from PyClassIn to ClassIn? I tried to use the rvalue reference and move constructor formalism as well but it didn't work. I also tried to do it via a function like this:
cdef vector[ClassIn] list_to_vec(classInList):
    cdef vector[ClassIn] classInVec
    for classIn in classInList:
        classInVec.push_back(<ClassIn>classIn)
    return classInVec
The problem here is the <ClassIn>classIn statement. It says:
no matching function for call to 'ClassIn::ClassIn(PyObject*&)'
So I am really puzzled here. How could this be solved? I adapted the code with the minimal example in the git I posted above.
EDIT2: To provide some more information for the comments below. I now have a wrapper for PyClassInVector exactly like the one for PyClassOutVector, see below:
cdef class PyClassInVector:
    cdef vector[ClassIn] vec

    cdef move_from(self, vector[ClassIn]&& move_this):
        self.vec = move(move_this)

    def __getitem__(self, idx):
        return PyClassIn2(self, idx)

    def __len__(self):
        return self.vec.size()

cdef class PyClassIn2:
    cdef ClassIn* thisptr
    cdef PyClassInVector vector

    def __cinit__(self, PyClassInVector vec, idx):
        self.vector = vec
        self.thisptr = &vec.vec[idx]
In analyze.pxd I also added:
cdef extern from "<utility>":
vector[ClassIn]&& move(vector[ClassIn]&&)
Now, based on the comments, in the PyAnalyze function I'd do:
def PyAnalyze(classesIn):
    # classesIn is a list of PyClassIn objects and needs to be converted to a PyClassInVector
    classInVec = PyClassInVector()
    cdef vector[ClassOut] classesOut = analyze(classInVec.vec)
    retval = PyClassOutVector()
    retval.move_from(move(classesOut))
    return retval
But as the comment in the code says, how can I get the list of PyClassIn objects (classesIn) into the PyClassInVector (classInVec)?
EDIT3: Imagine PyClassOut is decorated with an attribute that can be set via the constructor:
cdef class PyClassOut:
    def __cinit__(self, number):
        self.classOut_c = ClassOut(number)

    @property
    def number(self):
        return self.classOut_c.number
In run.py I'm doing something like this:
from cythonCode.classIn import PyClassIn
from cythonCode.classOut import PyClassOut
from cythonCode.analyze import PyAnalyze
classIn_list = []
classIn_list.append(PyClassIn(1))
classIn_list.append(PyClassIn(2))
classOut_list = PyAnalyze(classIn_list)
print(classOut_list[0].number)
print(classOut_list[1].number)
classOut_list is essentially the return value of the PyAnalyze function, which is a PyClassOutVector object. So classOut_list[0] gives me the PyClassOut2 object at index 0, but there I don't have access to the attribute number. I also notice that the address of classOut_list[1] is the same as that of classOut_list[0], which I don't understand. I am not entirely sure what 'move' does. Also, I actually want to get a Python list back as the return value, ideally with PyClassOut objects instead of the PyClassOut2 objects. Does that make sense? And is it feasible?
In the comments I tried to recommend a solution involving wrapping C++ vectors. I prefer this approach because it avoids copying memory multiple times, but I think it's causing more confusion and you'd rather just use Python lists. Sorry.
To use Python lists you just have to copy the input and output within PyAnalyze. You have to do it manually - no automatic conversion exists. You also have to be aware of the difference between your wrapped classes and the underlying C++ classes: you can only send the C++ classes to C++, not the wrapped ones.
Dealing with the input is easy:
def PyAnalyze(classesIn):
    # classesIn is a list of PyClassIn objects and needs to be converted to a PyClassInVector
    cdef vector[ClassIn] vecIn
    cdef vector[ClassOut] vecOut
    cdef PyClassIn a

    for a in classesIn:
        # need to type a to access its C attributes
        # Cython should check that a is of the correct type
        vecIn.push_back(a.classIn_c)

    vecOut = analyze(vecIn)
Returning the data back to Python wrapped as PyClassOut is a little more difficult, since you can't send a C++ type to a Cython constructor (all arguments to constructors must be Python types). Just construct an empty PyClassOut, then copy the new data into it. Again, work through your vector element by element:
def PyAnalyze(classesIn):
    cdef PyClassOut out_val

    # ... use code above ...

    out_list = []
    for i in range(vecOut.size()):
        out_val = PyClassOut()
        out_val.classOut_c = vecOut[i]
        out_list.append(out_val)
    return out_list
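One detail to reconcile with EDIT3: there, PyClassOut's __cinit__ requires a number argument, so the bare PyClassOut() above would raise a TypeError. A minimal sketch of a constructor that keeps that behaviour but also allows the empty-construct-then-copy pattern (assuming ClassOut is default-constructible, which a by-value cdef attribute already requires):

    cdef class PyClassOut:
        cdef ClassOut classOut_c  # held by value, default-constructed with the wrapper

        def __cinit__(self, number=None):
            # allow PyAnalyze to create an "empty" wrapper and fill in classOut_c afterwards
            if number is not None:
                self.classOut_c = ClassOut(number)

        @property
        def number(self):
            return self.classOut_c.number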
We would need to create a PyCapsule from a method of a class in Cython. We managed to write code which compiles and even runs without error, but the results are wrong.
A simple example is here: https://github.com/paugier/cython_capi/tree/master/using_cpython_pycapsule_class
The capsules are executed by Pythran (one needs to use the version on github https://github.com/serge-sans-paille/pythran).
The .pyx file:
from cpython.pycapsule cimport PyCapsule_New

cdef int twice_func(int c):
    return 2*c

cdef class Twice:
    cdef public dict __pyx_capi__

    def __init__(self):
        self.__pyx_capi__ = self.get_capi()

    cpdef get_capi(self):
        return {
            'twice_func': PyCapsule_New(
                <void *>twice_func, 'int (int)', NULL),
            'twice_cpdef': PyCapsule_New(
                <void *>self.twice_cpdef, 'int (int)', NULL),
            'twice_cdef': PyCapsule_New(
                <void *>self.twice_cdef, 'int (int)', NULL),
            'twice_static': PyCapsule_New(
                <void *>self.twice_static, 'int (int)', NULL)}

    cpdef int twice_cpdef(self, int c):
        return 2*c

    cdef int twice_cdef(self, int c):
        return 2*c

    @staticmethod
    cdef int twice_static(int c):
        return 2*c
The file compiled by pythran (call_capsule_pythran.py).
# pythran export call_capsule(int(int), int)

def call_capsule(capsule, n):
    r = capsule(n)
    return r
Once again it is a new feature of Pythran so one needs the version on github...
And the test file:
try:
    import faulthandler
    faulthandler.enable()
except ImportError:
    pass

import unittest

from twice import Twice
from call_capsule_pythran import call_capsule


class TestAll(unittest.TestCase):
    def setUp(self):
        self.obj = Twice()
        self.capi = self.obj.__pyx_capi__

    def test_pythran(self):
        value = 41
        print('\n')
        for name, capsule in self.capi.items():
            print('capsule', name)
            result = call_capsule(capsule, value)
            if name.startswith('twice'):
                if result != 2*value:
                    how = 'wrong'
                else:
                    how = 'good'
                print(how, f'result ({result})\n')


if __name__ == '__main__':
    unittest.main()
It is buggy and gives:
capsule twice_func
good result (82)
capsule twice_cpdef
wrong result (4006664390)
capsule twice_cdef
wrong result (4006664390)
capsule twice_static
good result (82)
It shows that it works fine for the standard function and for the static function but that there is a problem for the methods.
Note that the fact that it works for two capsules seems to indicate that the problem does not come from Pythran.
Edit
After DavidW's comments, I understand that we would have to create at run time (for example in get_capi) a C function with the signature int(int) from the bound method twice_cdef whose signature is actually int(Twice, int).
I don't know if this is really impossible to do with Cython...
To follow up/expand on my comments:
The basic issue is that Pythran is expecting a C function pointer with the signature int f(int) to be contained within the PyCapsule. However, the signature of your methods is int(PyObject* self, int c). The integer argument gets passed as self (not causing disaster since it isn't actually used...) and some arbitrary bit of memory is used in place of the int c. Unfortunately it isn't possible to use pure C code to create a C function pointer with "bound arguments", so Cython can't (and realistically won't be able to) do it.
Modification 1 is to get better compile-time type checking of what you're passing to your PyCapsules by creating a function that accepts the correct types and casting in there, rather than just casting to <void*> blindly. This doesn't solve your problem but warns you at compile-time when it isn't going to work:
ctypedef int (*f_ptr_type)(int)

cdef make_PyCapsule(f_ptr_type f, string):
    return PyCapsule_New(
        <void *>f, string, NULL)

# then in get_capi:
'twice_func': make_PyCapsule(twice_func, b'int (int)'),  # etc
It is actually possible to create C function from arbitrary Python callables using ctypes (or cffi) - see Using function pointers to methods of classes without the gil (bottom of answer). This adds an extra layer of Python calls so isn't terribly quick, and the code is a bit messy. ctypes achieves this by using runtime code generation (which isn't that portable or something you can do in pure C) to build a function on the fly and then create a pointer to that.
Although you claim in the comments that you don't think you can use the Python interpreter, I don't think this is true - Pythran generates Python extension modules (so is pretty bound to the Python interpreter) and it seems to work in your test case shown here:
_func_cache = []

cdef f_ptr_type py_to_fptr(f):
    import ctypes
    functype = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)
    ctypes_f = functype(f)
    _func_cache.append(ctypes_f)  # ensure references are kept
    return (<f_ptr_type*><size_t>ctypes.addressof(ctypes_f))[0]

# then in make_capi:
'twice_cpdef': make_PyCapsule(py_to_fptr(self.twice_cpdef), b'int (int)'),
Unfortunately it only works for cpdef and not cdef functions since it does rely on having a Python callable. cdef functions can be made to work with a lambda (provided you change get_capi to def instead of cpdef):
'twice_cdef': make_PyCapsule(py_to_fptr(lambda x: self.twice_cdef(x)), b'int (int)'),
It's all a little messy but can be made to work.
C++ Model
Say I have the following C++ data structures I wish to expose to Python.
#include <memory>
#include <vector>

struct mystruct
{
    int a, b, c, d, e, f, g, h, i, j, k, l, m;
};

typedef std::vector<std::shared_ptr<mystruct>> mystruct_list;
Boost Python
I can wrap these fairly effectively using boost::python with the following code, easily allowing me to use the existing mystruct (copying the shared_ptr) rather than recreating an existing object.
#include "mystruct.h"
#include <boost/python.hpp>
using namespace boost::python;
BOOST_PYTHON_MODULE(example)
{
class_<mystruct, std::shared_ptr<mystruct>>("MyStruct", init<>())
.def_readwrite("a", &mystruct::a);
// add the rest of the member variables
class_<mystruct_list>("MyStructList", init<>())
.def("at", &mystruct_list::at, return_value_policy<copy_const_reference>());
// add the rest of the member functions
}
Cython
In Cython, I have no idea how to extract an item from mystruct_list, without copying the underlying data. I have no idea how I could initialize MyStruct from the existing shared_ptr<mystruct>, without copying all the data over in one of various forms.
from libcpp.memory cimport shared_ptr
from libcpp.vector cimport vector
from cython.operator cimport dereference

cdef extern from "mystruct.h" nogil:
    cdef cppclass mystruct:
        int a, b, c, d, e, f, g, h, i, j, k, l, m

ctypedef vector[shared_ptr[mystruct]] mystruct_list

cdef class MyStruct:
    cdef shared_ptr[mystruct] ptr

    def __cinit__(MyStruct self):
        self.ptr.reset(new mystruct())

    property a:
        def __get__(MyStruct self):
            return dereference(self.ptr).a
        def __set__(MyStruct self, int value):
            dereference(self.ptr).a = value

cdef class MyStructList:
    cdef mystruct_list c
    cdef mystruct_list.iterator it

    def __cinit__(MyStructList self):
        pass

    def __getitem__(MyStructList self, int index):
        # How do I return MyStruct without copying the underlying `mystruct`?
        pass
I see many possible workarounds, and none of them are very satisfactory:
I could initialize an empty MyStruct, and in Cython assign over the shared_ptr. However, this would result in wasting an initialized struct for absolutely no reason.
cdef MyStruct value = MyStruct()
value.ptr = self.c.at(index)
return value
I also could copy the data from the existing mystruct to the new mystruct. However, this suffers from similar bloat.
cdef MyStruct value = MyStruct()
dereference(value.ptr).a = dereference(self.c.at(index)).a
return value
I could also expose a init=True flag for each __cinit__ method, which would prevent reconstructing the object internally if the C-object exists already (when init is False). However, this could cause catastrophic issues, since it would be exposed to the Python API and would allow dereferencing a null or uninitialized pointer.
def __cinit__(MyStruct self, bint init=True):
    if init:
        self.ptr.reset(new mystruct())
I could also overload __init__ with the Python-exposed constructor (which would reset self.ptr), but this would have risky memory safety if __new__ was used from the Python layer.
Bottom-Line
I would love to use Cython, for compilation speed, syntactical sugar, and numerous other reasons, as opposed to the fairly clunky boost::python. I'm looking at pybind11 right now, and it may solve the compilation speed issues, but I would still prefer to use Cython.
Is there any way I can do such a simple task idiomatically in Cython? Thanks.
The way this works in Cython is by having a factory class to create Python objects out of the shared pointer. This gives you access to the underlying C/C++ structure without copying.
Example Cython code:
<..>

cdef class MyStruct:
    cdef shared_ptr[mystruct] ptr

    def __cinit__(self):
        # Do not create new ref here, we will
        # pass one in from Cython code
        self.ptr = NULL

    def __dealloc__(self):
        # Do de-allocation here, important!
        if self.ptr is not NULL:
            <de-alloc>

    <rest per MyStruct code above>

cdef object PyStruct(shared_ptr[mystruct] MyStruct_ptr):
    """Python object factory class taking Cpp mystruct pointer
    as argument
    """
    # Create new MyStruct object. This does not create
    # new structure but does allocate a null pointer
    cdef MyStruct _mystruct = MyStruct()

    # Set pointer of cdef class to existing struct ptr
    _mystruct.ptr = MyStruct_ptr

    # Return the wrapped MyStruct object with MyStruct_ptr
    return _mystruct

def make_structure():
    """Function to create new Cpp mystruct and return
    python object representation of it
    """
    cdef MyStruct mypystruct = PyStruct(new mystruct)
    return mypystruct
Note the type for the argument of PyStruct is a pointer to the Cpp struct.
mypystruct then is a python object of class MyStruct, as returned by the factory class, which refers to the
Cpp mystruct without copying. mypystruct can be safely returned in def cython functions and used in python space, per make_structure code.
To return a Python object of an existing Cpp mystruct pointer just wrap it with PyStruct like
return PyStruct(my_cpp_struct_ptr)
anywhere in your Cython code.
Obviously only def functions are visible to Python there, so the Cpp function calls would need to be wrapped as well inside MyStruct if they are to be used in Python space, at least if you want the Cpp function calls inside the Cython class to release the GIL (probably worth doing for obvious reasons).
For a real-world example see this Cython extension code and the underlying C code bindings in Cython. Also see this code for Python function wrapping of C function calls that let go of GIL. Not Cpp but same applies.
See also official Cython documentation on when a factory class/function is needed (Note that all constructor arguments will be passed as Python objects). For built in types, Cython does this conversion for you but for custom structures or objects a factory class/function is needed.
The Cpp structure initialisation could be handled in __new__ of PyStruct if needed, per suggestion above, if you want the factory class to actually create the C++ structure for you (depends on the use case really).
The benefit of a factory class with pointer arguments is that it allows you to use existing pointers to C/C++ structures and wrap them in a Python extension class, rather than always having to create new ones. It would be perfectly safe, for example, to have multiple Python objects referring to the same underlying C struct. Python's ref counting ensures they won't be de-allocated prematurely. You should still check for null when deallocating though, as the shared pointer could already have been de-allocated explicitly (e.g., by del).
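An illustrative sketch of that sharing behaviour, reusing the MyStruct/PyStruct names from this answer (and assuming the factory takes a shared_ptr as above):

    from libcpp.memory cimport shared_ptr

    def shared_view_demo():
        cdef shared_ptr[mystruct] sp = shared_ptr[mystruct](new mystruct())
        py_a = PyStruct(sp)
        py_b = PyStruct(sp)   # a second wrapper, no copy of the underlying struct
        py_a.a = 7            # modify through one wrapper...
        return py_b.a         # ...and observe it through the other (returns 7)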
Note that there is, however, some overhead in creating new python objects even if they do point to the same C++ structure. Not a lot, but still.
IMO this automatic de-allocation and ref counting of C/C++ pointers is one of the greatest features of Python's C extension API. Since all of that machinery acts only on Python objects, the C/C++ structures need to be wrapped in a compatible Python object class definition.
Note - My experience is mostly in C, the above may need adjusting as I'm more familiar with regular C pointers than C++'s shared pointers.
I have existing C++ code that defines some classes I need to use, but I need to be able to send those classes to Python code. Specifically, I need to create class instances in C++, create Python objects to serve as wrappers for these C++ objects, then pass these Python objects to Python code for processing. This is just one piece of a larger C++ program, so it needs to be done ultimately in C++ using the C/Python API.
To make my life easier, I have used Cython to define extension classes (cdef classes) that serve as the Python wrappers for my C++ objects. I am using the typical format where the cdef class contains a pointer to the C++ class, which is then initialized when the cdef class instance is created. Since I also want to be able to replace the pointer if I have an existing C++ object to wrap, I have added methods to my cdef classes to accept() the C++ object and take its pointer. My other cdef classes successfully use the accept() method in Cython, for example when one object owns another.
Here is a sample of my Cython code:
MyCPlus.pxd
cdef extern from "MyCPlus.h" namespace "mynamespace":
cdef cppclass MyCPlus_Class:
MyCPlus_Class() except +
PyModule.pyx
cimport MyCPlus
from libcpp cimport bool

cdef class Py_Class [object Py_Class, type PyType_Class]:
    cdef MyCPlus.MyCPlus_Class* thisptr
    cdef bool owned

    cdef void accept(self, MyCPlus.MyCPlus_Class &indata):
        if self.owned:
            del self.thisptr
        self.thisptr = &indata
        self.owned = False

    def __cinit__(self):
        self.thisptr = new MyCPlus.MyCPlus_Class()
        self.owned = True

    def __dealloc__(self):
        if self.owned:
            del self.thisptr
The problem comes when I try to access the accept() method from C++. I tried using the public and api keywords on my cdef class and on the accept() method, but I cannot figure out how to expose this method in the C struct in Cython's auto-generated .h file. No matter what I try, the C struct looks like this:
PyModule.h (auto-generated by Cython)
struct Py_Class {
    PyObject_HEAD
    struct __pyx_vtabstruct_11PyModule_Py_Class *__pyx_vtab;
    mynamespace::MyCPlus_Class *thisptr;
    bool owned;
};
I also tried typing the self input as a Py_Class, and I even tried forward-declaring Py_Class with the public and api keywords. I also experimented with making accept() a static method. Nothing I've tried works to expose the accept() method so that I can use it from C++. I did try accessing it through __pyx_vtab, but I got a compiler error, "invalid use of incomplete type". I have searched quite a bit, but haven't seen a solution to this. Can anyone help me? Please and thank you!
As you pointed in your comment, it does seem that the __pyx_vtab member is for Cython use only, since it doesn't even define the struct type for it in the exported header(s).
Adding to your response, one approach could also be:
cdef api class Py_Class [object Py_Class, type Py_ClassType]:
    ...
    cdef void accept(self, MyCPlus.MyCPlus_Class &indata):
        ...  # do stuff here
    ...

cdef api void (*Py_Class_accept)(Py_Class self, MyCPlus.MyCPlus_Class &indata)
Py_Class_accept = &Py_Class.accept
Basically, we define a function pointer and set it to the extension method we want to expose. This is not that much different from your response's cdef'd function; the main difference is that we can define our methods as usual in the class definition without having to duplicate functionality or forward method/function calls to another function to expose them. One caveat is that we have to declare the function pointer's signature almost verbatim to the method's, including the self extension type (in this case); then again, this also applies to regular functions.
Do note that I tried this out in a C-level Cython .pyx file; I haven't and do not intend to test it on a CPP implementation file. But hopefully this might work just as fine, I guess.
This is not really a solution, but I came up with a workaround for my problem. I am still hoping for a solution that allows me to tell Cython to expose the accept() method to C++.
My workaround is that I wrote a separate function for my Python class (not a method). I then gave the api keyword both to my Python class and to the new function:
cdef api class Py_Class [object Py_Class, type PyType_Class]:
    (etc.)

cdef api Py_Class wrap_MyCPlusClass(MyCPlus.MyCPlus_Class &indata):
    wrapper = Py_Class()
    del wrapper.thisptr
    wrapper.thisptr = &indata
    wrapper.owned = False
    return wrapper
This gets a little unwieldy with the number of different classes I need to wrap, but at least Cython puts the function in the API where it is easy to use:
struct Py_Class* wrap_MyCPlusClass(mynamespace::MyCPlusClass &);
You probably want to use cpdef instead of cdef when declaring accept. See the docs:
Callable from Python and C
* Are declared with the cpdef statement.
* Can be called from anywhere, because it uses a little Cython magic.
* Uses the faster C calling conventions when being called from other Cython code.
Try that!