I'm trying to use pybind11 to bind a struct that looks like this:
struct myStruct {
    int na;
    int nb;
    double* a;
    double* b;
};
I'm not sure of the right way to go about it. The examples in the pybind11 documentation show how to attach buffer protocol semantics to an object, but not to a class member.
I don't have the luxury of changing the interface of myStruct to contain std::vectors either, which would allow me to use the usual .def_readwrite().
I've tried doing something like this:
py::class_<myStruct>(m, "myStruct")
    .def_property("a",
        [](myStruct &s) { return py::array_t<double>({s.na}, {sizeof(double)}, s.a); },
        [](myStruct &s, py::array_t<double> val) { std::copy((double*) val.request().ptr, (double*) val.request().ptr + s.na, s.a); }
    );
This compiles, but in Python I don't see the changes persist in the underlying data:
print(my_struct.a[0]) # prints 0.0
my_struct.a[0] = 123.0
print(my_struct.a[0]) # still prints 0.0
This is most likely not the most elegant answer, but maybe it gives you a starting point and a temporary solution. I think what you need to do is use shared pointers.
Under https://github.com/pybind/pybind11/issues/1150 someone asked something similar, but I was not able to adapt it to your example and only got the same result as yours, with no changes to the data.
What worked for me in your specific example was using a shared_ptr and defining setter and getter functions for the pointers, with a simple def_property for the pybind11 class.
class class_DATA{
public:
    int na;
    // the shared_ptr must point at allocated storage before set_a/get_a are used
    std::shared_ptr<double> a = std::make_shared<double>(0.0);
    void set_a(double val){ *a = val; }
    double get_a(void){ return *a; }
};
PYBIND11_MODULE(TEST, m){
    m.doc() = "pybind11 example plugin";
    // the custom class
    py::class_<class_DATA>(m, "class_DATA", py::dynamic_attr())
        .def(py::init<>()) // needed to define the constructor
        .def_readwrite("na", &class_DATA::na)
        .def_property("a", &class_DATA::get_a, &class_DATA::set_a, py::return_value_policy::copy);
}
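Alternatively, if you want to keep the original def_property approach from the question: as far as I understand, py::array_t makes a copy of the data when it is constructed from a raw pointer without a base handle, which is why writes from Python did not persist. A rough, untested sketch that passes the owning object as the array's base so the getter returns a view:
// requires #include <pybind11/numpy.h> and <algorithm>
py::class_<myStruct>(m, "myStruct")
    .def_property("a",
        // getter: take the Python object itself so it can serve as the array's
        // base; with a base handle, py::array_t wraps s.a as a view instead of
        // copying it, so writes from Python persist in the underlying data
        [](py::object self) {
            auto &s = self.cast<myStruct&>();
            return py::array_t<double>({s.na}, {sizeof(double)}, s.a, self);
        },
        // setter: copy the incoming array into the existing buffer
        [](myStruct &s, py::array_t<double> val) {
            auto buf = val.request();
            std::copy((double*) buf.ptr, (double*) buf.ptr + s.na, s.a);
        });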
I have included a minimal working example below; it can be compiled using the typical pybind11 instructions (I use CMake).
I have an abstract base class, Abstract, which is pure virtual. I can easily wrap this in pybind11 using a "trampoline class" (this is well documented by pybind11).
Further, I have a concrete implementation of Abstract, ToBeWrapped, that is also wrapped using pybind11.
My issue is that I have some client code which accepts an arbitrary PyObject* (or, in the case of this example, pybind11's wrapper py::object) and expects to cast this to Abstract*.
However, as illustrated in my example, I am unable to cast the py::object to Abstract*.
I have no problem casting to `ToBeWrapped*` and then storing that as an `Abstract*`; however, this would require my client code to know ahead of time what kind of `Abstract*` the Python interpreter is sending, which defeats the purpose of the abstract base class.
TL;DR
Is it possible to modify this code such that the client accessMethod is able to arbitrarily handle an Abstract* passed from the python interpreter?
#include <pybind11/pybind11.h>
#include <iostream>
namespace py = pybind11;
// abstract base class - cannot be instantiated on its own
class Abstract
{
public:
virtual ~Abstract() = 0;
virtual std::string print() const = 0;
};
Abstract::~Abstract(){}
// concrete implementation of Abstract
class ToBeWrapped : public Abstract
{
public:
ToBeWrapped(const std::string& msg = "heh?")
: myMessage(msg){};
std::string print() const override
{
return myMessage;
}
private:
const std::string myMessage;
};
// We need a trampoline class in order to wrap this with pybind11
class AbstractPy : public Abstract
{
public:
using Abstract::Abstract;
std::string print() const override
{
PYBIND11_OVERLOAD_PURE(
std::string, // return type
Abstract, // parent class
print, // name of the function
// arguments (if any)
);
}
};
// I have client code that accepts a raw PyObject* - this client code base implements its
// own python interpreter, and calls this "accessMethod" expecting to convert the python
// object to its c++ type.
//
// Rather than mocking up the raw PyObject* method (which would be trivial) I elected to
// keep this minimal example 100% pybind11
void accessMethod(py::object obj)
{
// runtime error: py::cast_error
//Abstract* casted = obj.cast<Abstract*>();
// this works
Abstract* casted = obj.cast<ToBeWrapped*>();
}
PYBIND11_MODULE(PyMod, m)
{
m.doc() = R"pbdoc(
This is a python module
)pbdoc";
py::class_<Abstract, AbstractPy>(m, "Abstract")
.def("print", &Abstract::print)
;
py::class_<ToBeWrapped>(m, "WrappedClass")
.def(py::init<const std::string&>())
;
m.def("access", &accessMethod, "This method will attempt to access the wrapped type");
}
You need to declare the hierarchy relationship, so this:
py::class_<ToBeWrapped>(m, "WrappedClass")
should be:
py::class_<ToBeWrapped, Abstract>(m, "WrappedClass")
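For completeness, a sketch of the module body with that change applied (same names as in the question):
py::class_<Abstract, AbstractPy>(m, "Abstract")
    .def("print", &Abstract::print)
    ;

// With Abstract declared as a base, pybind11 knows how to upcast
// WrappedClass instances, so obj.cast<Abstract*>() in accessMethod
// no longer throws py::cast_error.
py::class_<ToBeWrapped, Abstract>(m, "WrappedClass")
    .def(py::init<const std::string&>())
    ;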
Currently, when working with Python + pybind11, I find it frustrating to work with the typed C++ classes/structs.
I would like to change my bindings so that they generate a simple Python class, with an __init__ and a simple function, like the one shown below. Is something like this feasible?
Reasoning:
I currently have a struct that I generate via C++, but it has a lot of heavy std::vector<float>s that I would like to pass to Python and keep as numpy arrays inside a similar interfacing Python class. (Bonus points if you can tell me how to move vectors to numpy arrays quickly!)
I have already completely bound my C++ struct with pybind11, so I feel like I know what I'm doing... however, I can't seem to figure out if this is possible!
So, as a learning exercise, can I make the following Python class via pybind11?
>>> python
class MyStruct:
    def __init__(self, A_in, descriptor_in):
        self.A = A_in
        self.descriptor = descriptor_in

    def add_to_vec(self, f_in):
        self.A.append(f_in)
<<< python
Edit:
I want to say I 'think' this is doable with the Python C API, but I'd like to avoid using that directly if I can (but if you think that's the only way, please let me know :) ).
Edit2: (response to #Erwan)
The only way I'm aware of to get class variables individually is shown below. You cannot use the buffer_protocol interface advertised by pybind11 if the struct contains more than one numpy array you would like to expose. However, this requires creating a Python-interface-only `.def` function (not ideal) that points to (what I think is) a copy of the original data, so it's probably slow. I haven't benchmarked it, and I'm not sure whether this is a hack or the correct way to get vectors into numpy arrays.
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <vector>
#include <string>
struct Pet {
Pet(const std::string &name) : name(name) {
bdata.push_back(22.);
bdata.push_back(23.1);
bdata.push_back(24.);
bdata.push_back(2222.);
}
void setName(const std::string &name_) { name = name_; }
const std::string &getName() const { return name; }
std::string name;
std::vector<float> bdata;
};
namespace py = pybind11;
PYBIND11_MODULE(example, m) {
py::class_<Pet>(m, "Pet")
.def(py::init<const std::string &>())
.def("setName", &Pet::setName)
.def("getName", &Pet::getName)
.def("bdata", [](Pet &m) -> py::array {
py::buffer_info buff_info(py::buffer_info(
m.bdata.data(), /* Pointer to buffer */
sizeof(float), /* Size of one scalar */
py::format_descriptor<float>::format(), /* Python struct-style format descriptor */
m.bdata.size() /* Number of dimensions */
));
return py::array(buff_info);
});
}
I don't understand your question in full, but I'll take this part:
bonus points if you can tell me how to move vectors to be numpy arrays quickly!
If you use the result of bdata.data() combined with numpy.frombuffer() (and bdata.size() if need be), you can get a view on the vector data, which is guaranteed to be contiguous as of C++11. (The normal numpy.array() call will not honor copy=False in this case, but frombuffer acts like a cast.) Since there is no copy, that's probably as quick as it gets.
Below is an example in cppyy (which allows for easy testing, but the use of which is otherwise immaterial to the answer of how to mix std::vector and numpy.array per se). The gravy is in the last few lines: the update to 'arr' will show up in the original vector (and vice versa) because frombuffer is a view, not a copy:
import cppyy
import numpy as np
# load struct definition
cppyy.cppdef("""
struct Pet {
Pet(const std::string &name) : name(name) {
bdata.push_back(22.);
bdata.push_back(23.1);
bdata.push_back(24.);
bdata.push_back(2222.);
}
void setName(const std::string &name_) { name = name_; }
const std::string &getName() const { return name; }
std::string name;
std::vector<float> bdata;
};""")
# create a pet object
p = cppyy.gbl.Pet('fido')
print(p.bdata[0]) # for reference (prints 22, per above)
# create a numpy view on the std::vector's data
# add count=p.bdata.size() if need be
arr = np.frombuffer(p.bdata.data(), dtype=np.float32)
# prove that it worked as intended
arr[0] = 14
print(p.bdata[0]) # shows update to 14
p.bdata[2] = 17.5
print(arr[2]) # shows update to 17.5
which will print:
22.0
14.0
17.5
'arr' may become invalid if the std::vector resizes. If you know the maximum size, however, and it is not too large or will be fully used for sure, you can reserve that, so the vector's internal data will not be reallocated.
Depending on how/where you store the numpy array, I also recommend tying the lifetime of 'p' (and hence 'p.bdata') to 'arr', e.g. by keeping them both as data members in an instance of the wrapper class you're after.
If you want to do the conversion in C++ instead, use PyArray_FromBuffer from NumPy's array API.
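If you stay with pybind11 rather than cppyy, a rough (untested) equivalent of the zero-copy view is to pass the owning Python object as the array's base, which also gives you the lifetime coupling mentioned above:
#include <pybind11/numpy.h>

// inside PYBIND11_MODULE(example, m), reusing the Pet struct from the question
py::class_<Pet>(m, "Pet")
    .def(py::init<const std::string &>())
    // numpy view onto bdata; passing `self` as the base avoids a copy and
    // keeps the Pet instance alive for as long as the array exists
    .def_property_readonly("bdata", [](py::object self) {
        auto &p = self.cast<Pet&>();
        return py::array_t<float>({p.bdata.size()}, {sizeof(float)},
                                  p.bdata.data(), self);
    });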
I have an existing C++ code library that uses a struct with std::vector, which should be exposed to Python.
in the header:
struct sFOO
{
unsigned int start = 3 ;
double foo = 20.0 ;
};
in the cpp (singlefoo and foos are data members of myfoo, declared in the header):
namespace myName
{
    myfoo::myfoo(){
        singlefoo = sFOO();
        foos = std::vector<sFOO>();
    }

    std::vector<sFOO>* myfoo::get_vector(){
        return &foos;
    }
}
and the boost::python snippet:
using namespace boost::python;
class dummy3{};
BOOST_PYTHON_MODULE(vStr)
{
scope myName = class_<dummy3>("myName");
class_<myName::sFOO>("sFOO")
.add_property("start",&myName::sFOO::start)
.add_property("foo",&myName::sFOO::foo)
;
class_<myName::myfoo>("myfoo", no_init)
.def(init<>())
.def("checkfoo",&myName::myfoo::checkfoo)
.add_property("foos",&myName::myfoo::foos)
.add_property("singlefoo",&myName::myfoo::singlefoo)
}
(FYI, fictitious class dummy3 is used to simulate namespace, and using scope is therefore not an option.)
Compilation and importing are OK, and I can access singlefoo, but whenever I try to access the vector foos, I encounter the error message below.
No Python class registered for C++ class std::vector<myName::sFOO, std::allocator<myName::sFOO> >
To circumvent this issue, I first tried vector_indexing_suite, but it didn't help with exposing a pre-defined vector of structs.
I also assumed that there should be a solution related to exposing a pointer to Python, so I tried to get a pointer with the following:
.def("get_vector",&myName::myfoo::get_vector)
which produces a compile error.
Since I am quite a novice at both C++ and Boost, any comments, including solutions, tips for searching, and suggestions for suitable references, would be greatly appreciated.
Thanks in advance!
Method .def("get_vector",&myName::myfoo::get_vector) is not working because it returns a pointer to a vector, so it's necessary to inform the policy that defines how object ownership should be managed:
class_<myName::myfoo>("myfoo", no_init)
// current code
.def("get_vector", &myfoo::get_vector, return_value_policy<reference_existing_object>())
;
In order to use vector_indexing_suite, it is necessary to implement the equality operator for the class that it holds:
struct sFOO
{
unsigned int start = 3 ;
double foo = 20.0 ;
bool operator==(const sFOO& rhs)
{
return this == &rhs; //< implement your own rules.
}
};
then you can export the vector:
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
class_<std::vector<sFOO>>("vector_sFOO_")
.def(vector_indexing_suite<std::vector<sFOO>>())
;
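Putting the two pieces together, the relevant part of the module might look roughly like this (a sketch using the names from the question, assuming sFOO has the operator== shown above):
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>

using namespace boost::python;

BOOST_PYTHON_MODULE(vStr)
{
    // ... sFOO registration as in the question ...

    // make std::vector<sFOO> indexable/iterable from Python
    class_<std::vector<myName::sFOO>>("vector_sFOO")
        .def(vector_indexing_suite<std::vector<myName::sFOO>>())
        ;

    class_<myName::myfoo>("myfoo", no_init)
        .def(init<>())
        .def("get_vector", &myName::myfoo::get_vector,
             return_value_policy<reference_existing_object>())
        ;
}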
I have a scenario where I need to pass around opaque void* pointers through my C++ <-> Python interface, implemented with SWIG (version 1.3). I am able to return and accept void* in regular functions like this:
void* foo();
void goo( void* );
The problems begin when I try to do the same using generic functions (and this is what I actually need to do). I was able to deal with the "foo" scenario above with a function doing essentially something like this (stolen from the code generated by SWIG itself for the function foo above):
PyObject*
foo(...)
{
if( need to return void* )
return SWIG_NewPointerObj(SWIG_as_voidptr(ptr), SWIGTYPE_p_void, 0 | 0 );
}
I am not convinced this is the best way to do this, but it works. The "goo" scenario is much more troublesome. Here I need to process some generic input, and while I can implement the conversion logic following the example in the code generated by SWIG itself for the function goo above:
void
goo( PyObject* o )
{
...
if( o is actually void* ) { <==== <1>
void* ptr;
int res = SWIG_ConvertPtr( o, SWIG_as_voidptrptr(&ptr), 0, 0);
if( SWIG_IsOK(res) ) do_goo( ptr );
}
...
}
I do not have any way to implement the condition on line <1>. While for other types I can use functions like PyInt_Check, PyString_Check, etc., for void* I do not have that option. On the Python side it is represented by an object of type PySwigObject, and while it definitely knows that it wraps a void* (I can see it if I print the value), I do not know how to access this information from the PyObject*.
I would appreciate any help figuring out any way to deal with this problem.
Thank you.
Am I right in thinking you essentially want some way of testing whether the incoming Python object is a 'void*'? Presumably this is the same as testing whether it is a pointer to anything?
If so, what about using a boost::shared_ptr (or other pointer container) to hold your generic object? This would require some C++ code rewriting though, so might not be feasible.
SWIG's manual is kind of confusing to me. I am wrapping my C library for Python so that I can test my C code in Python. Now I want to know how I can access the C pointer address in Python.
For example, this is the code I have:
typedef struct _buffer_t {
char buf[1024];
char *next_ptr;
} buffer_t;
void parse_buffer(buffer_t *buf_p) {
    buf_p->next_ptr++;
}
What I want to do is shown below, in C code:
buffer_t my_buf;
my_buf.next_ptr = my_buf.buf;
parse_buffer(&my_buf);
expect_equal(my_buf.buf + 1, my_buf.next_ptr);
How do I do the same thing in Python?
After importing the SWIG-wrapped module, I have the buffer_t class in Python.
The problem is that SWIG is trying to wrap the buffer as a String in Python, which is not just a pointer. The synthesised assignment for next_ptr will allocate memory and make a string copy, not just do a pointer assignment. You can work around this in several ways.
The simplest is to use %extend to add a "reset buffer" method in Python:
%extend buffer_t {
    void resetPtr() {
        $self->next_ptr = $self->buf;
    }
}
which can then be called in Python to make the assignment you want.
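If you also want the equivalent of the expect_equal check from the question, one option (a sketch; next_offset is a hypothetical helper name) is another %extend member that reports how far next_ptr has advanced into buf, which avoids comparing raw addresses from Python:
%extend buffer_t {
    /* hypothetical helper: distance of next_ptr from the start of buf,
       so a Python test can assert parse_buffer advanced it by 1 */
    long next_offset() {
        return (long)($self->next_ptr - $self->buf);
    }
}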
Alternatively, if you want to force the buffer to be treated as just another pointer, you could force SWIG to treat both members as void* instead of char* and char[]. I had hoped that would be as simple as a %apply per type, but it seems not to work correctly for memberin and memberout in my testing:
%apply void * { char *next_ptr };
%apply void * { char buf[ANY] };
Given that the memberin/memberout typemaps are critical to making this work I think %extend is by far the cleanest solution.