Boost python getter/setter with the same name

I am wrapping C++ classes with boost-python and I am wondering if there is a better way to do it than what I am doing now.
The problem is that the classes have getters/setters that have the same name and there doesn't seem to be a painless way to wrap this with boost-python.
Here is a simplified version of the problem. Given this class:
#include <boost/python.hpp>
using namespace boost::python;
class Foo {
public:
    double x() const
    {
        return _x;
    }

    void x(const double new_x)
    {
        _x = new_x;
    }

private:
    double _x;
};
I would like to do something like:
BOOST_PYTHON_MODULE(foo)
{
class_<Foo>("Foo", init<>())
.add_property("x", &Foo::x, &Foo::x)
;
}
This doesn't work because boost-python can't figure out which version of the function to use.
In fact, you can't even do
.def("x", &Foo::x)
for the same reason.
I was re-reading the tutorial at boost.org and the section on overloading seemed super promising. Unfortunately it doesn't seem to be what I'm looking for.
In the overloading section, it mentions a BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS macro that works like this:
if there were another member function in Foo that took defaulted arguments:
void z(int i=42)
{
std::cout << i << "\n";
}
you can then use the macro:
BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS(z_member_overloads, z, 0, 1)
and then in the BOOST_PYTHON_MODULE:
.def("z", &Foo::z, z_member_overloads())
z_member_overloads lets you call def once, and it exposes the method to Python so it can be called with either 0 arguments or 1 argument.
I was hoping that this would work for my x() and x(double val) getter/setter, but it doesn't work.
doing:
BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS(x_member_overloads, x, 0, 1)
...
.def("x", &Foo::x, x_member_overloads())
doesn't compile:
error: no matching member function for call to 'def'
.def("x", &Foo::x, x_member_overloads())
~^~~
Question:
So, is there another macro or something that can make this work?
For completeness, this is how I'm currently handling cases like this:
.add_property(
    "x",
    make_function(
        [](Foo& foo) {
            return foo.x();
        },
        default_call_policies(),
        boost::mpl::vector<double, Foo&>()
    ),
    make_function(
        [](Foo& foo, const double val) {
            foo.x(val);
        },
        default_call_policies(),
        boost::mpl::vector<void, Foo&, double>()
    )
)

You can do this by casting to the appropriate overload (untested):
class_<Foo>("Foo", init<>())
.add_property("x",
static_cast< double(Foo::*)() const >(&Foo::x), // getter
static_cast< void(Foo::*)(const double) >(&Foo::x)) // setter
;
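For reference, here is a sketch of the complete module with the casts spelled out via typedefs (untested, assuming the Foo class from the question):
#include <boost/python.hpp>
using namespace boost::python;

// Foo exactly as in the question: overloaded getter/setter both named x.
class Foo {
public:
    double x() const { return _x; }
    void x(const double new_x) { _x = new_x; }
private:
    double _x;
};

// Naming the two pointer-to-member types keeps the casts readable.
typedef double (Foo::*x_getter)() const;
typedef void (Foo::*x_setter)(const double);

BOOST_PYTHON_MODULE(foo)
{
    class_<Foo>("Foo", init<>())
        .add_property("x",
                      static_cast<x_getter>(&Foo::x),  // selects the const getter
                      static_cast<x_setter>(&Foo::x))  // selects the setter
        ;
}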

Is it possible in pybind11 to use py::cast to access an abstract base class?

I have included a minimal working example below - it can be compiled using the typical pybind11 instructions (I use cmake).
I have an abstract base class, Abstract, which is pure virtual. I can easily wrap this in pybind11 using a "trampoline class" (this is well documented by pybind11).
Further, I have a concrete implementation of Abstract, ToBeWrapped, that is also wrapped using pybind11.
My issue is that I have some client code which accepts an arbitrary PyObject* (or, in the case of this example, pybind11's wrapper py::object) and expects to cast this to Abstract*.
However, as illustrated in my example, I am unable to cast the py::object to Abstract*.
I have no problem casting to ToBeWrapped* and then storing that as an Abstract*; however, this would require my client code to know ahead of time what kind of Abstract* the python interpreter is sending, which defeats the purpose of the abstract base class.
TL;DR
Is it possible to modify this code such that the client accessMethod is able to arbitrarily handle an Abstract* passed from the python interpreter?
#include <pybind11/pybind11.h>
#include <iostream>
namespace py = pybind11;
// abstract base class - cannot be instantiated on its own
class Abstract
{
public:
virtual ~Abstract() = 0;
virtual std::string print() const = 0;
};
Abstract::~Abstract(){}
// concrete implementation of Abstract
class ToBeWrapped : public Abstract
{
public:
ToBeWrapped(const std::string& msg = "heh?")
: myMessage(msg){};
std::string print() const override
{
return myMessage;
}
private:
const std::string myMessage;
};
// We need a trampoline class in order to wrap this with pybind11
class AbstractPy : public Abstract
{
public:
using Abstract::Abstract;
std::string print() const override
{
PYBIND11_OVERLOAD_PURE(
std::string, // return type
Abstract, // parent class
print, // name of the function
// arguments (if any)
);
}
};
// I have client code that accepts a raw PyObject* - this client code base implements its
// own python interpreter, and calls this "accessMethod" expecting to convert the python
// object to its c++ type.
//
// Rather than mocking up the raw PyObject* method (which would be trivial) I elected to
// keep this minimal example 100% pybind11
void accessMethod(py::object obj)
{
// runtime error: py::cast_error
//Abstract* casted = obj.cast<Abstract*>();
// this works
Abstract* casted = obj.cast<ToBeWrapped*>();
}
PYBIND11_MODULE(PyMod, m)
{
m.doc() = R"pbdoc(
This is a python module
)pbdoc";
py::class_<Abstract, AbstractPy>(m, "Abstract")
.def("print", &Abstract::print)
;
py::class_<ToBeWrapped>(m, "WrappedClass")
.def(py::init<const std::string&>())
;
m.def("access", &accessMethod, "This method will attempt to access the wrapped type");
}
You need to declare the hierarchy relationship, so this:
py::class_<ToBeWrapped>(m, "WrappedClass")
should be:
py::class_<ToBeWrapped, Abstract>(m, "WrappedClass")
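For reference, a sketch of the corrected module (untested, otherwise unchanged from the question):
PYBIND11_MODULE(PyMod, m)
{
    m.doc() = "This is a python module";

    py::class_<Abstract, AbstractPy>(m, "Abstract")
        .def("print", &Abstract::print)
        ;

    // Declaring Abstract as a base here is what lets pybind11 upcast a
    // WrappedClass instance when obj.cast<Abstract*>() is requested.
    py::class_<ToBeWrapped, Abstract>(m, "WrappedClass")
        .def(py::init<const std::string&>())
        ;

    m.def("access", &accessMethod, "This method will attempt to access the wrapped type");
}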

Boost Python 2: Constructors using `std::string &`

I have legacy code in C++ (which would be a huge pain to edit) and I need to use it from Python 2 for speed reasons.
I have two classes. One is responsible for loading a huge amount of data from memory, in the form of a std::string, and converting it to an internal representation, MiddleClass. The second one converts the internal representation MiddleClass back to a std::string.
class Load {
public:
    Load(const std::string& data) { ... }
    MiddleClass load() { ... }
};

class Save {
public:
    Save(std::string& data) { ... }
    void save(const MiddleClass& middleclass) { ... }
};
My goal is, to use this setup in Python 2 like this:
import datahandler # my lib
import requests
request = requests.get("url-to-data")
loader = datahandler.Load(request.content) # my C++ class Load
internal_representation = loader.load()
.
.
.
result_variable = str() # or None or something not important
saver = datahandler.Save(result_variable) # my C++ class Save
saver.save(internal_representation)
How can I achieve this?
I've run into trouble, right from the start.
Simple variant:
BOOST_PYTHON_MODULE(datahandler)
{
    class_<MiddleClass>("MiddleClass");
    // some .defs - not important
    class_<Load>("Load", init<const std::string &>())
        .def("load", &Load::load);
    class_<Save>("Save", init<std::string &>())
        .def("save", &Save::save);
}
It will compile, no worries, but the data that is loaded comes out somehow mangled, which leads me to think that I am doing it terribly wrong.
I also found this slightly off-topic SO question, which told me that I can't have std::string &, because Python strings are immutable.
So, conclusion: I have no idea what to do now :( Can anyone here help me? Thanks.
Take this working example as a reference.
Define your C++ classes. For instance:
class MiddleClass {
public:
    explicit MiddleClass(const std::string& data) : parent_data_(data) {}

    void print() {
        std::cout << parent_data_ << std::endl;
    }

private:
    std::string parent_data_;
};

class Loader {
public:
    explicit Loader(const std::string& data) : data_(data) {}

    MiddleClass load() {
        return MiddleClass(data_);
    }

private:
    std::string data_;
};
Create the boost bindings
boost::python::class_<MiddleClass>("MiddleClass",
boost::python::init<const std::string&>(boost::python::arg("data"), ""))
.def("print_data", &MiddleClass::print);
boost::python::class_<Loader>("Loader",
boost::python::init<const std::string&>(boost::python::arg("data"), ""))
.def("load", &Loader::load);
Install your library in the right Python site-packages directory.
Enjoy it in python:
from my_cool_package import MiddleClass, Loader
example_string = "whatever"
loader = Loader(data=example_string)
# Get the middle class
middle_class = loader.load()
# Print the data in the middle class
middle_class.print_data()
The expected output:
whatever
So, I have found a solution. Prove me wrong, but I think that what I am trying to achieve is impossible.
Python has immutable strings, so passing a "reference" to a string into a function and expecting to be able to change it from inside the function is simply not valid.
Take this code as an example:
variable = "Hello"
def changer(var):
    var = "Bye"
changer(variable)
print(variable)
Prints "Hello". In Python, you can't make it work differently. (although to be exact, it is still being passed as a reference, but when you modify Python string, you just create a new one and a new reference).
So, how to get arround this?
Simple! Create a C++ wrapper, that will handle passing reference on std::string and return copy of resulting string. Not very effective, but you probably can't make it better.
Sample code of SaveWrapper class:
class SaveWrapper {
public:
    // some constructor

    std::string save(MiddleClass& value) {
        std::string result;
        Save saver(result);
        saver.save(value);
        return result;
    }
};
Which can be easily "ported" to Python!
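For instance, the bindings might then look like this (a sketch, untested; it assumes SaveWrapper gets a default constructor and reuses Load and MiddleClass from above):
BOOST_PYTHON_MODULE(datahandler)
{
    using namespace boost::python;

    class_<MiddleClass>("MiddleClass");
    // some .defs - not important

    class_<Load>("Load", init<const std::string&>())
        .def("load", &Load::load);

    // Python never touches Save's std::string& directly; it just gets back
    // a copy of the finished string, which converts to a Python str.
    class_<SaveWrapper>("Save", init<>())
        .def("save", &SaveWrapper::save);
}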

How do I wrap the operator<< to __str__ in Python using SWIG?

If I want to print information about an object in C++, I'll use the output stream operator <<:
class Foo
{
public:
friend std::ostream& operator<<(std::ostream& out, const Foo& foo);
private:
double bar = 7;
};
inline std::ostream& operator<<(std::ostream& out, const Foo& foo)
{
return out << foo.bar;
}
Then, I can do Foo foo; std::cout << foo << std::endl;. Something equivalent in Python would be implementing __str__ and then saying print(foo). But since the operator is not really a member of Foo, I don't know how to do this in SWIG.
What would I have to write in my interface file to reuse my implementation of the outstream operator for use in print()?
Additionally, is it possible to let SWIG do an automatic redirect of shared_ptr of an object, so that if I somewhere return std::shared_ptr<Foo>, I can still call print(sharedPtrToFoo) and it will call the __str__ or operator<< of the pointed to object?
Something like this ought to work, assuming that you're not using -builtin:
%extend Foo {
    std::string __str__() const {
        std::ostringstream out;
        out << *$self;
        return out.str();
    }
}
Note that this is substantially similar to this answer, albeit with the recommendation to use std::string instead of const char * removing a subtle bug.
With regard to shared_ptr, every method of Foo should be exposed transparently when the object is held by shared_ptr<Foo>, so I'd expect that to just work.
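Put together, the interface file might look roughly like this (a sketch; foo.h is a hypothetical header declaring Foo and its operator<<):
%module foo

%{
#include <sstream>
#include "foo.h"   // hypothetical header declaring Foo and operator<<
%}

%include <std_string.i>
%include "foo.h"

%extend Foo {
    std::string __str__() const {
        std::ostringstream out;
        out << *$self;
        return out.str();
    }
}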

boost python won't auto-convert char* data members

I'm trying to wrap a C++ api and I'm hitting a roadblock on some char* class members. It seems that boost-python will auto convert char const * and std::string types into python objects (based on this answer), but it balks at char* types. This is the error I get (in python):
TypeError: No to_python (by-value) converter found for C++ type: char*
It turns out that these particular char * members probably should have been declared as char const * since the strings are never altered.
I'm new to boost-python so maybe there is an obvious answer, but I'm not having much luck googling this one.
Is there an easy way to tell boost-python to auto convert these char* members?
(Unfortunately I can't change the declarations of char * to char const * since the API I am wrapping is not under my control.)
UPDATE:
Ok so I think that I need to add a custom converter to handle the char* members. I started writing one:
/** to-python convert for char* */
struct c_char_p_to_python_str
{
static PyObject* convert(char* s) {
return incref(object(const_cast<const char*>(s)).ptr());
}
};
// register the char*-to-python converter
to_python_converter<char*, c_char_p_to_python_str>();
Unfortunately this does not work. This is the error:
error: expected unqualified-id
to_python_converter<char*, c_char_p_to_python_str>();
^
Looking at the docs I can see that the template args have this signature:
template <class T, class Conversion, bool convertion_has_get_pytype_member=false>
Since char* isn't a class I'm guessing that's why this didn't work. Anyone have some insight?
UPDATE2:
Nope. Turns out to_python_converter needs to get called inside of the BOOST_PYTHON_MODULE call.
I got the to_python_converter working (with some modifications). I also wrote a function to convert from python and registered it with converter::registry::push_back. I can see my to_python code running, but the from_python code never seems to run.
Let's assume we're wrapping some third-party API, and set aside the awfulness of having those pointers exposed and mucking with them from the outside.
Here's a short proof of concept:
#include <boost/python.hpp>
#include <cstring>  // strlen, strncpy

namespace bp = boost::python;
class example
{
public:
example()
{
text = new char[1];
text[0] = '\0';
}
~example()
{
delete[] text;
}
public:
char* text;
};
char const* get_example_text(example* e)
{
return e->text;
}
void set_example_text(example* e, char const* new_text)
{
delete[] e->text;
size_t n(strlen(new_text));
e->text = new char[n+1];
strncpy(e->text, new_text, n);
e->text[n] = '\0';
}
BOOST_PYTHON_MODULE(so02)
{
bp::class_<example>("example")
.add_property("text", &get_example_text, &set_example_text)
;
}
Class example owns text, and is responsible for managing the memory.
We provide an external getter and setter function. The getter is simple, it just provides read access to the string. The setter frees the old string, allocates new memory of appropriate size, and copies the data.
Here's a simple test in python interpreter:
>>> import so02
>>> e = so02.example()
>>> e.text
''
>>> e.text = "foobar"
>>> e.text
'foobar'
Notes:
set_example_text() could perhaps take std::string or bp::object so that we have the length easily available, and potentially allow assignment from more than just strings (see the sketch after these notes).
If there are many member variables to wrap and the getter/setter pattern is similar, generate the code using templates, or even just a few macros.
There may be a way to do this with the converters; I'll have a look into that tomorrow. However, as we're dealing with memory management here, I'd personally prefer to handle it this way, as it's much more obvious what's happening.
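For instance, the std::string variant of the setter might look like this (a sketch, untested):
#include <cstring>
#include <string>

// Taking std::string means the length comes for free, and boost-python
// still converts a Python str to it automatically.
void set_example_text(example* e, const std::string& new_text)
{
    delete[] e->text;
    e->text = new char[new_text.size() + 1];
    std::memcpy(e->text, new_text.c_str(), new_text.size() + 1);  // copies the '\0' too
}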
This expands on Dan's answer. I wrote some macro definitions which generate lambda expressions. The benefit of this approach is that it is not tied to a particular type or member name.
In the API I am wrapping, I have a few hundred classes to wrap. This allows me to make a single macro call for every char* class member.
Here is a modified version of Dan's example code:
#include <boost/python.hpp>
#include <cstring>  // strlen, strncpy

namespace bp = boost::python;

// ADD_PROPERTY(#ATTR, getter, setter): GET_CHAR_P builds the read lambda,
// SET_CHAR_P builds the write lambda.
#define ADD_PROPERTY(TYPE, ATTR) add_property(#ATTR, GET_CHAR_P(TYPE, ATTR), \
        SET_CHAR_P(TYPE, ATTR))

#define GET_CHAR_P(TYPE, ATTR) +[](const TYPE& e){ \
        if (!e.ATTR) return "";                    \
        return (const char*)e.ATTR;                \
    }

#define SET_CHAR_P(TYPE, ATTR) +[](TYPE& e, char const* new_text){ \
        delete[] e.ATTR;                                           \
        size_t n(strlen(new_text));                                \
        e.ATTR = new char[n+1];                                    \
        strncpy(e.ATTR, new_text, n);                              \
        e.ATTR[n] = '\0';                                          \
    }
class example
{
public:
example()
{
text = new char[1];
text[0] = '\0';
}
~example()
{
delete[] text;
}
public:
char* text;
};
BOOST_PYTHON_MODULE(topics)
{
bp::class_<example>("example")
.ADD_PROPERTY(example, text);
}
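For clarity, ADD_PROPERTY(example, text) expands to roughly the following (formatted for readability):
bp::class_<example>("example")
    .add_property("text",
        // getter: never hand Python a null pointer
        +[](const example& e) {
            if (!e.text) return "";
            return (const char*)e.text;
        },
        // setter: free the old buffer and copy the new text
        +[](example& e, char const* new_text) {
            delete[] e.text;
            size_t n(strlen(new_text));
            e.text = new char[n+1];
            strncpy(e.text, new_text, n);
            e.text[n] = '\0';
        });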

Writing a Python module using C/API and C++ classes

I am new to the business of writing custom Python modules and I am a bit confused about how Capsules work. I use Python 2.7.6 from the system OSX installation and try to use Capsules (as recommended for Python > 2.7) for passing pointers around (before that, PyCObject was used for this). My code does not work at the moment and I would like to get some insight into how things should be handled in principle here. The code should define a class LuscherClm and I want to be able to do the following:
>>> c40=Luscher(4,0)
>>>
>>> c40(0.12)
>>> <print the result of the evaluation>
First question: at the moment I would have to do something like:
>>> c40=Luscher.init(4,0)
>>>
>>> c40.eval(0.12)
Segfault
My first question is therefore: how do I have to modify the method table to have more operator-style casts instead of the member functions init and eval.
However, my code has other problems and here is the relevant part (the underlying C++ class works smoothly, I use it in production a lot):
The destructor:
//destructor
static void clm_destruct(PyObject* capsule){
void* ptr=PyCapsule_GetPointer(capsule,"zetfunc");
Zetafunc* zetptr=static_cast<Zetafunc*>(ptr);
delete zetptr;
return;
}
The constructor: it returns the pointer to the capsule. I do not know whether this is correct, because in this case, when I call clm=LuscherClm.init(l,m), the clm object is a PyCapsule and has no attribute eval, so I cannot call clm.eval(x) on it. How should this be handled?
//constructor
static PyObject* clm_init(PyObject* self, PyObject *args){
//return value
PyObject* result=NULL;
//parse variables
unsigned int lval=0;
int mval=0;
if(!PyArg_ParseTuple(args,"li",&lval,&mval)){
::std::cout << "Please specify l and m!" << ::std::endl;
return result;
}
//class instance:
Zetafunc* zetfunc=new Zetafunc(lval,mval);
instanceCapsule=PyCapsule_New(static_cast<void*> (zetfunc),"zetfunc",&clm_destruct);
return instanceCapsule;
}
So how is the capsule passed to the evaluate function? The code below is not correct since I have not updated it after moving from CObjects to Capsules. Should the capsule be a global variable (I do not like that), or how can I pass it to the evaluation function? Or should I call it on self? But what is self at the moment?
//evaluate the function
static PyObject* clm_evaluate(PyObject* self, PyObject* args){
//get the PyCObject from the capsule:
void* tmpzetfunc=PyCapsule_GetPointer(instanceCapsule,"zetfunc");
if (PyErr_Occurred()){
std::cerr << "Some Error occured!" << std::endl;
return NULL;
}
Zetafunc* zetfunc=static_cast< Zetafunc* >(tmpzetfunc);
//parse value:
double x;
if(!PyArg_ParseTuple(args,"d",&x)){
std::cerr << "Specify a number at which you want to evaluate the function" << std::endl;
return NULL;
}
double result=(*zetfunc)(x).re();
//return the result as a packed function:
return Py_BuildValue("d",result);
}
//methods
static PyMethodDef LuscherClmMethods[] = {
{"init", clm_init, METH_VARARGS, "Initialize clm class!"},
{"eval", clm_evaluate, METH_VARARGS, "Evaluate the Zeta-Function!"},
{NULL, NULL, 0, NULL} /* Sentinel */
};
Python < 3 initialisation function:
PyMODINIT_FUNC
initLuscherClm(void)
{
PyObject *m = Py_InitModule("LuscherClm", LuscherClmMethods);
return;
}
Can you explain to me what is wrong and why? I would like to stay away from SWIG or boost if possible, since this module should be easily portable and I want to avoid having to install additional packages every time I want to use it somewhere else.
Further: what is the overhead produced by the C API in calling the function? I need to call it on the order of 10^6 times and I would still like it to be fast.
Ok, I am using boost.python now, but I get a segfault when I run object.eval(). This is my procedure now:
BOOST_PYTHON_MODULE(threevecd)
{
class_< threevec<double> >("threevecd",init<double,double,double>());
}
BOOST_PYTHON_MODULE(LuscherClm)
{
class_<Zetafunc>("LuscherClm",init<int,int, optional<double,threevec<double>,double,int> >())
.def("eval",&Zetafunc::operator(),return_value_policy<return_by_value>());
boost::python::to_python_converter<dcomplex,dcomplex_to_python_object>();
}
dcomplex is my own complex number implementation. So I had to write a converter:
struct dcomplex_to_python_object
{
static PyObject* convert(dcomplex const& comp)
{
if(fabs(comp.im())<std::numeric_limits<double>::epsilon()){
boost::python::object result=boost::python::object(complex<double>(comp.re(),comp.im()));
return boost::python::incref(result.ptr());
}
else{
return Py_BuildValue("d",comp.re());
}
}
};
Complex128 is a numpy extension type which is not understood by boost. So my questions are:
1) How can I return a complex number as a Python datatype (is complex a standard Python type)?
2) Why do I get a segfault? My result in my test case is real, so it should go to the else branch. I guess the pointer goes out of scope and that's it. But even in the if-case (where I take care of ref-increments), it segfaults. Can someone help me with the type conversion issue?
Thanks
Thorsten
Ok, I got it. The following converter does the job:
struct dcomplex_to_python_object
{
    static PyObject* convert(dcomplex const& comp)
    {
        // PyFloat_FromDouble and PyComplex_FromDoubles already return a new
        // reference, which is exactly what a to-python converter must hand
        // back, so no extra Py_INCREF is needed here.
        PyObject* result;
        if (std::abs(comp.im()) <= std::numeric_limits<double>::epsilon()) {
            result = PyFloat_FromDouble(comp.re());
        }
        else {
            result = PyComplex_FromDoubles(comp.re(), comp.im());
        }
        return result;
    }
};
Using this converter and the post by Wouter, I suppose my question is answered. Thanks
