I want to set a Python variable from C++, so that the C++ program can create an object (Game* game = new Game();) and the Python code can then reference that instance (call its functions, etc.). How can I achieve this?
I feel like I have some core misunderstanding of the way Python or Boost-Python works.
The line main_module.attr("game") = game is inside a try/catch block, and the error (retrieved with PyErr_Fetch) is "No to_python (by-value) converter found for C++ type: class Game".
E.g.
class_<Game>("Game")
.def("add", &Game::add)
;
object main_module = import("__main__");
Game* game = new Game();
main_module.attr("game") = game; //This does not work
From Python:
import testmodule
testmodule.game.foo(7)
When dealing with language bindings, one often has to be pedantic about the details. By default, when a C++ object crosses the language boundary, Boost.Python will create a copy, as this is the safest course of action to prevent dangling references. If a copy should not be made, then one needs to be explicit as to the ownership of the C++ object:
To pass a reference to a C++ object to Python while maintaining ownership in C++, use boost::python::ptr() or boost::ref(). The C++ code should guarantee that the C++ object's lifetime is at least as long as the Python object. When using ptr(), if the pointer is null, then the resulting Python object will be None.
To transfer ownership of a C++ object to Python, one can apply the manage_new_object ResultConverterGenerator. C++ code should not attempt to access the pointer once the Python object's lifetime ends.
For shared ownership, one would need to expose the class with a HeldType of a smart pointer supporting shared semantics, such as boost::shared_ptr.
Once the Python object has been created, it would need to be inserted into a Python namespace to be generally accessible:
From within the module definition, use boost::python::scope to obtain a handle to the current scope. For example, the following would insert x into the example module:
BOOST_PYTHON_MODULE(example)
{
boost::python::scope().attr("x") = ...; // example.x
}
To insert into the __main__ module, one can import __main__. For example, the following would insert x into the __main__ module:
boost::python::import("__main__").attr("x") = ...;
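The example below covers passing a reference with ptr() and transferring ownership with manage_new_object, but not the shared-ownership case. As a minimal sketch of that case, assuming a Game class similar to the one in the question (module and variable names are illustrative), exposing the class with a boost::shared_ptr HeldType lets C++ and Python co-own the same instance:
#include <boost/python.hpp>
#include <boost/shared_ptr.hpp>

struct game
{
    void add(int) {}
};

namespace {
    boost::shared_ptr<game> global_game; // C++ keeps one reference.
}

BOOST_PYTHON_MODULE(shared_example)
{
    namespace python = boost::python;
    // A HeldType of boost::shared_ptr<game> enables shared ownership and
    // registers to/from-Python conversions for the smart pointer itself.
    python::class_<game, boost::shared_ptr<game>>("Game")
        .def("add", &game::add)
        ;
    global_game = boost::shared_ptr<game>(new game());
    // __main__.game and global_game now co-own the instance; it is only
    // destroyed once both sides have released their references.
    python::import("__main__").attr("game") = global_game;
}
Either side can then drop its reference without invalidating the other.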
Here is an example demonstrating how to directly construct the Python object from C++, transfer ownership of a C++ object to Python, and construct a Python object that references a C++ object:
#include <iostream>
#include <memory> // std::unique_ptr
#include <boost/python.hpp>
// Mockup model.
struct spam
{
spam(int id)
: id_(id)
{
std::cout << "spam(" << id_ << "): " << this << std::endl;
}
~spam()
{
std::cout << "~spam(" << id_ << "): " << this << std::endl;
}
// Explicitly disable copying.
spam(const spam&) = delete;
spam& operator=(const spam&) = delete;
int id_;
};
/// @brief Transfer ownership to a Python object. If the transfer fails,
/// then the object will be destroyed and an exception is thrown.
template <typename T>
boost::python::object transfer_to_python(T* t)
{
// Transfer ownership to a smart pointer, allowing for proper cleanup
// in case Boost.Python throws.
std::unique_ptr<T> ptr(t);
// Use the manage_new_object generator to transfer ownership to Python.
namespace python = boost::python;
typename python::manage_new_object::apply<T*>::type converter;
// Transfer ownership to the Python handler and release ownership
// from C++.
python::handle<> handle(converter(*ptr));
ptr.release();
return python::object(handle);
}
namespace {
spam* global_spam;
} // namespace
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
// Expose spam.
auto py_spam_type = python::class_<spam, boost::noncopyable>(
"Spam", python::init<int>())
.def_readonly("id", &spam::id_)
;
// Directly create an instance of Python Spam and insert it into this
// module's namespace.
python::scope().attr("spam1") = py_spam_type(1);
// Construct an instance of Python Spam from a C++ spam, transferring
// ownership to Python. The Python Spam instance will be inserted into
// this module's namespace.
python::scope().attr("spam2") = transfer_to_python(new spam(2));
// Construct an instance of Python Spam from C++, but retain ownership of
// spam in C++. The Python Spam instance will be inserted into the
// __main__ scope.
global_spam = new spam(3);
python::import("__main__").attr("spam3") = python::ptr(global_spam);
}
Interactive usage:
>>> import example
spam(1): 0x1884d40
spam(2): 0x1831750
spam(3): 0x183bd00
>>> assert(1 == example.spam1.id)
>>> assert(2 == example.spam2.id)
>>> assert(3 == spam3.id)
~spam(1): 0x1884d40
~spam(2): 0x1831750
In the example usage, note how Python did not destroy spam(3) upon exit, as it was not granted ownership of the underlying object.
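One detail the example leaves open is who eventually deletes spam(3), since C++ retains ownership. A possible approach, not part of the original example and with illustrative names, is to register a cleanup callback with Python's atexit module from within the module definition:
#include <boost/python.hpp>

namespace {
    struct widget {};
    widget* global_widget = nullptr;

    void delete_global_widget()
    {
        delete global_widget;
        global_widget = nullptr;
    }
}

BOOST_PYTHON_MODULE(cleanup_example)
{
    namespace python = boost::python;
    python::class_<widget>("Widget");
    global_widget = new widget();
    python::import("__main__").attr("widget") = python::ptr(global_widget);
    // Ask Python to call delete_global_widget() during interpreter shutdown.
    // The Python reference must not be used after the callback has run.
    python::import("atexit").attr("register")(
        python::make_function(&delete_global_widget));
}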
Related
I have two C++ libraries that expose a Python API using two different frameworks (pybind11 and Cython). I need to transfer an object between them (both ways) using Python capsules. Since Cython and pybind11 use Python capsules in different ways, is it even possible to make this work?
I have library A that defines a class Foo and exposes it to Python with pybind11. Library B exposes its API using Cython. LibB owns a shared_ptr<Foo>, which is a member of one of LibB's classes, say Bar.
Bar returns the shared_ptr<Foo> member as a PyCapsule which I capture in pybind11 of the Foo class. I'm unpacking the shared_ptr<Foo> from the capsule, returning it to python and the user can operate on this object in python using pybind11 bindings for Foo.
Then I need to put it back in a capsule in pybind11 and return back to Bar.
Bar's python API operates on PyObject and PyCapsule because cython allows that. pybind11 and thus Foo's API does not accept those types and I'm forced to use pybind11::object and pybind11::capsule.
Everything works fine until the moment when I'm trying to use the pybind11::capsule created in pybind11, inside a cython method of class Bar which expects a PyCapsule*.
The shared_ptr<Foo> inside the pybind11::capsule is corrupted and my app crashes.
Has anyone tried to make those 2 libs talk to each other?
libA -> class Foo
namespace foo{
class Foo {
public:
void foo() {...}
};
}
libB -> class Bar
namespace bar {
class Bar {
public:
PyObject* get_foo() {
const char * capsule_name = "foo_in_capsule";
return PyCapsule_New(&m_foo, capsule_name, nullptr);
}
static Bar fooToBar(PyObject * capsule) {
void * foo_ptr = PyCapsule_GetPointer(capsule, "foo_in_capsule");
auto foo = static_cast<std::shared_ptr<foo::Foo>*>(foo_ptr);
// here the shared_ptr is corrupted (garbage numbers returned for use_count() and get() )
std::cout << "checking the capsule: " << foo->use_count() << " " << foo->get() << std::endl
Bar b;
b.m_foo = *foo; //this is what I would like to get
return b;
}
std::shared_ptr<Foo> m_foo;
};
}
pybind11 for Foo
void regclass_foo_Foo(py::module m)
{
py::class_<foo::Foo, std::shared_ptr<foo::Foo>> foo(m, "Foo");
foo.def("foo", &foo::Foo::foo);
foo.def_static("from_capsule", [](py::object* capsule) {
auto* pycapsule_ptr = capsule->ptr();
auto* foo_ptr = reinterpret_cast<std::shared_ptr<foo::Foo>*>(PyCapsule_GetPointer(pycapsule_ptr, "foo_in_capsule"));
return *foo_ptr;
});
foo.def_static("to_capsule", [](std::shared_ptr<foo::Foo>& foo_from_python) {
auto pybind_capsule = py::capsule(&foo_from_python, "foo_in_capsule", nullptr);
return pybind_capsule;
});
}
cython for Bar
cdef extern from "bar.hpp" namespace "bar":
    cdef cppclass Bar:
        object get_foo() except +

def foo_to_bar(capsule):
    b = C.fooToBar(capsule)
    return b
putting it all together in python
from bar import Bar, foo_to_bar
from foo import Foo
bar = Bar(... some arguments ...)
capsule1 = bar.get_foo()
foo_from_capsule = Foo.from_capsule(capsule1)
# this is the important part - need to operate on foo using its python api
print("checking if foo works", foo_from_capsule.foo())
# and use it to create another bar object with a (possibly) modified foo object
capsule2 = Foo.to_capsule(foo_from_capsule)
bar2 = foo_to_bar(capsule2)
There are too many unfinished details in your code for me to even test your PyCapsule version. My view is that the issue is with the lifetime of the shared pointers: your capsule points to a shared pointer whose lifetime is tied to the Bar it lives in. However, the capsule may outlive that Bar. You should probably be creating a new shared_ptr<Foo>* (with new), pointing to that in your capsule, and defining a destructor (for the capsule) to delete it, along the lines of the sketch below.
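As a minimal sketch of that suggestion, assuming the Foo type from the question (the helper name make_foo_capsule is made up here), the capsule would own a heap-allocated copy of the shared_ptr and release it in its destructor:
#include <Python.h>
#include <memory>

namespace foo { class Foo; }

// Hypothetical replacement for the body of Bar::get_foo(): heap-allocate a
// copy of the shared_ptr so the capsule owns its own reference, and register
// a capsule destructor that releases it.
PyObject* make_foo_capsule(const std::shared_ptr<foo::Foo>& foo)
{
    auto holder = new std::shared_ptr<foo::Foo>(foo);
    return PyCapsule_New(holder, "foo_in_capsule",
        [](PyObject* capsule) {
            // Runs when the capsule itself is destroyed.
            delete static_cast<std::shared_ptr<foo::Foo>*>(
                PyCapsule_GetPointer(capsule, "foo_in_capsule"));
        });
}
The consumer of the capsule then holds a reference of its own, independent of the lifetime of the Bar that produced it.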
An outline of an alternative approach that I think should work better is as follows:
Write your classes purely in terms of C++ types, so get_foo and foo_to_bar just take/return shared_ptr<Foo>.
Define PyBar as a proper Cython class, rather than using capsules:
cdef public class PyBar [object PyBarStruct, type PyBarType]:
    cdef shared_ptr[Bar] ptr

cdef public PyBar PyBar_from_shared_ptr(shared_ptr[Bar] b):
    cdef PyBar x = PyBar()
    x.ptr = b
    return x
This generates a header file containing definitions of PyBarStruct and PyBarType (you probably don't need the latter). I also define a basic module-level function to create a PyBar from a shared pointer (and make that public as well, so it appears in the header too).
Then use PyBind11 to define a custom type-caster to/from shared_ptr<Bar>. load would be something like:
bool load(handle src, bool) {
    auto bar_mod = py::module::import("bar");
    auto bar_type = py::getattr(bar_mod, "PyBar");
    if (!py::isinstance(src, bar_type)) {
        return false;
    }
    // now cast to my PyBarStruct
    auto ptr = reinterpret_cast<PyBarStruct*>(src.ptr());
    value = ptr->ptr; // access the shared_ptr of the struct
    return true;
}
while the C++ to Python caster would be something like
static handle cast(std::shared_ptr<Bar> src, return_value_policy /* policy */, handle /* parent */) {
    auto bar_mod = py::module::import("bar"); // See note...
    return PyBar_from_shared_ptr(src);
}
I've made sure to call py::module::import("bar") in both functions because I don't think it's safe to use the Cython-defined functions until the module has been imported somewhere, and importing it in the casters ensures that.
This code is untested so almost certainly has errors, but should give a cleaner approach than PyCapsule.
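For reference, here is a rough sketch, equally untested, of how those two bodies could sit inside a complete pybind11 type caster. The header name bar.h (for the Cython-generated public header declaring PyBarStruct and PyBar_from_shared_ptr) and the Python class name PyBar are assumptions taken from the outline above:
#include <memory>
#include <pybind11/pybind11.h>
#include "bar.hpp" // bar::Bar from the question
#include "bar.h"   // Cython-generated: PyBarStruct, PyBar_from_shared_ptr

namespace py = pybind11;

namespace pybind11 { namespace detail {

template <>
struct type_caster<std::shared_ptr<bar::Bar>>
{
    PYBIND11_TYPE_CASTER(std::shared_ptr<bar::Bar>, _("PyBar"));

    // Python -> C++: accept instances of the Cython PyBar class and read
    // the shared_ptr straight out of the generated struct layout.
    bool load(handle src, bool)
    {
        auto bar_mod = py::module::import("bar");
        auto bar_type = py::getattr(bar_mod, "PyBar");
        if (!py::isinstance(src, bar_type))
            return false;
        auto ptr = reinterpret_cast<PyBarStruct*>(src.ptr());
        value = ptr->ptr;
        return true;
    }

    // C++ -> Python: build a PyBar through the Cython helper function.
    static handle cast(std::shared_ptr<bar::Bar> src,
                       return_value_policy /* policy */, handle /* parent */)
    {
        py::module::import("bar"); // ensure the Cython module is initialized
        return handle(reinterpret_cast<PyObject*>(PyBar_from_shared_ptr(src)));
    }
};

}} // namespace pybind11::detail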
Is there a smart pointer in C++ for a resource managed by someone else? I am using pybind11 to wrap C++ code as follows.
class B {};
class A {
public:
// original C++ class interface
A(std::shared_ptr<B> pb) : mb(pb){}
// have to add this for pybind11, since pybind11 doesn't take shared_ptr as argument.
A(B * pb):A(std::shared_ptr<B>(pb)){}
private:
std::shared_ptr<B> mb;
};
namespace py = pybind11;
PYBIND11_MODULE(test, m)
{
py::class_<B>(m, "B")
.def(py::init<>());
py::class_<A>(m, "A")
.def(py::init<B *>());
}
Then in Python, I'll use them as follows:
b = B()
a = A(b)
This is fine as long as I don't del a. When I del a in Python, the shared_ptr mb I created in C++'s A tries to destroy the B object, which is managed by Python, and the program crashes. So my question is whether there is some smart pointer in C++ that doesn't take ownership from a raw pointer. weak_ptr won't work because I'd still have to create a shared_ptr.
Pybind11 uses a unique pointer behind the scenes to manage the C++ object: it thinks it owns the object and deallocates it whenever the Python wrapper object is deallocated. However, you are sharing this pointer with other parts of the C++ code base. As such, you need to make the Python wrapper of the B class manage the instance of B using a shared pointer. You can do this with the class_ template, e.g.
PYBIND11_MODULE(test, m)
{
py::class_<B, std::shared_ptr<B> >(m, "B")
.def(py::init<>());
py::class_<A>(m, "A")
.def(py::init<std::shared_ptr<B> >());
}
https://pybind11.readthedocs.io/en/stable/advanced/smart_ptrs.html#std-shared-ptr
Does boost::python provide any guarantee about when the C++ destructor of a wrapped object is called relative to the moment the corresponding Python object's reference count reaches zero?
I am concerned about a C++ object that opens a file for writing and performs the file closing in its destructor. Is it guaranteed that the file is written when all python references to the object are deleted or out of scope?
I mean:
A=MyBoostPythonObject()
del A # Is the C++ destructor of MyBoostPythonObject called here?
My experience suggests that the destructor is always called at this point but could not find any guarantee for this.
Boost.Python guarantees that if the Python object has ownership of the wrapped C++ object, then when the Python object is deleted, the wrapped C++ object will be deleted as well. The Python object's lifetime is dictated by Python: when an object's reference count reaches zero, the object may be destroyed immediately. In non-trivial cases, such as cyclic references, the objects will be managed by the garbage collector and may be destroyed any time before the program exits.
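As a minimal sketch of that guarantee, with illustrative names rather than the question's MyBoostPythonObject, a wrapped class whose destructor prints makes the timing observable:
#include <boost/python.hpp>
#include <iostream>

// Illustrative RAII-style class; imagine the destructor flushing and
// closing a file.
struct file_writer
{
    file_writer()  { std::cout << "file_writer(): open" << std::endl; }
    ~file_writer() { std::cout << "~file_writer(): close" << std::endl; }
};

BOOST_PYTHON_MODULE(guarantee_example)
{
    // Python owns instances it creates, so dropping the last Python
    // reference destroys the wrapped C++ object.
    boost::python::class_<file_writer>("FileWriter");
}
Expected interactive usage:
>>> import guarantee_example
>>> a = guarantee_example.FileWriter()
file_writer(): open
>>> del a
~file_writer(): close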
One Pythonic solution may be to expose a type that implements the context manager protocol. The context manager protocol is made up of a pair of methods: one that is invoked when entering a runtime context, and one that is invoked when exiting a runtime context. By using a context manager, one can control the scope in which the file is open.
>>> with MyBoostPythonObject() as A: # opens file.
... A.write(...) # file remains open while in scope.
... # A destroyed once context's scope is exited.
Here is an example demonstrating exposing a RAII-type class to Python as a context manager:
#include <memory> // std::make_shared, std::shared_ptr
#include <boost/python.hpp>
#include <iostream>
// Legacy API.
struct spam
{
spam(int x) { std::cout << "spam(): " << x << std::endl; }
~spam() { std::cout << "~spam()" << std::endl; }
void perform() { std::cout << "spam::perform()" << std::endl; }
};
/// @brief Python Context Manager for the Spam class.
class spam_context_manager
{
public:
spam_context_manager(int x): x_(x) {}
void perform() { return impl_->perform(); }
// context manager protocol
public:
// Use a static member function to get a handle to the self Python
// object.
static boost::python::object enter(boost::python::object self)
{
namespace python = boost::python;
spam_context_manager& myself =
python::extract<spam_context_manager&>(self);
// Construct the RAII object.
myself.impl_ = std::make_shared<spam>(myself.x_);
// Return this object, allowing caller to invoke other
// methods exposed on this class.
return self;
}
bool exit(boost::python::object type,
boost::python::object value,
boost::python::object traceback)
{
// Destroy the RAII object.
impl_.reset();
return false; // Do not suppress the exception.
}
private:
std::shared_ptr<spam> impl_;
int x_;
};
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
python::class_<spam_context_manager>("Spam", python::init<int>())
.def("perform", &spam_context_manager::perform)
.def("__enter__", &spam_context_manager::enter)
.def("__exit__", &spam_context_manager::exit)
;
}
Interactive usage:
>>> import example
>>> with example.Spam(42) as spam:
... spam.perform()
...
spam(): 42
spam::perform()
~spam()
I am trying to expose two different classes to Python, but I can't get them to compile. I tried to follow the boost::python example, which works quite well. But when I try to write the wrapper classes for my own classes, it doesn't work. I have provided two minimal examples below:
struct Base
{
virtual ~Base() {}
virtual std::unique_ptr<std::string> f() = 0;
};
struct BaseWrap : Base, python::wrapper<Base>
{
std::unique_ptr<std::string> f()
{
return this->get_override("f")();
}
};
and
struct Base
{
virtual ~Base() {}
virtual void f() = 0;
};
struct BaseWrap : Base, python::wrapper<Base>
{
void f()
{
return this->get_override("f")();
}
};
The first one does not compile because of the unique pointer (I think boost::python does not support unique pointers?), and the second example complains about the return statement inside the void function. Can someone help me solve these problems?
The examples are failing to compile because:
The first example attempts to convert an unspecified type (the return type of override::operator()) to an incompatible type. In particular, Boost.Python does not currently support std::unique_ptr, and hence will not convert to it.
The second example attempts to return the unspecified type mentioned above when the calling function declares that it returns void.
From a Python perspective, strings are immutable, and attempting to transfer ownership of a string from Python to C++ would violate those semantics. However, one can create a copy of the string within C++ and return ownership of the copy to the caller. For example:
std::unique_ptr<std::string> BaseWrap::f()
{
// This could throw if the Python method throws or the Python
// method returns a value that is not convertible to std::string.
std::string result = this->get_override("f")();
// Adapt the result to the return type.
return std::unique_ptr<std::string>(new std::string(result));
}
The object returned from this->get_override("f")() has an unspecified type, but can be used to convert to C++ types. The invocation of the override will throw if Python throws, and the conversion to the C++ type will throw if the object returned from Python is not convertible to the C++ type.
Here is a complete example demonstrating two ways to adapt the returned Python object to a C++ object. As mentioned above, the override conversion can be used. Alternatively, one can use boost::python::extract<>, allowing one to check if the conversion will fail before performing the conversion:
#include <cassert> // assert
#include <memory> // std::unique_ptr
#include <boost/algorithm/string.hpp> // boost::to_upper_copy
#include <boost/python.hpp>
struct base
{
virtual ~base() {}
virtual std::unique_ptr<std::string> perform() = 0;
};
struct base_wrap : base, boost::python::wrapper<base>
{
std::unique_ptr<std::string> perform()
{
namespace python = boost::python;
// This could throw if the Python method throws or the Python
// method returns a value that is not convertible to std::string.
std::string result = this->get_override("perform")();
// Alternatively, an extract could be used to defer extracting the
// result.
python::object method(this->get_override("perform"));
python::extract<std::string> extractor(method());
// Check that extractor contains a std::string without throwing.
assert(extractor.check());
// extractor() would throw if it did not contain a std::string.
assert(result == extractor());
// Adapt the result to the return type.
return std::unique_ptr<std::string>(new std::string(result));
}
};
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
python::class_<base_wrap, boost::noncopyable>("Base", python::init<>())
.def("perform", python::pure_virtual(&base::perform))
;
python::def("make_upper", +[](base* object) {
auto result = object->perform(); // Force dispatch through base_wrap.
assert(result);
return boost::to_upper_copy(*result);
});
}
Interactive usage:
>>> import example
>>> class Derived(example.Base):
... def perform(self):
... return "abc"
...
>>> derived = Derived()
>>> assert("ABC" == example.make_upper(derived))
When a class contains a pointer to another class, the contents of the other class appear to be inconsistently reported by SWIG. Here is the smallest reproducible example (SSCCE):
Config.h:
class Config
{
int debug;
public:
void showDebug(void);
};
class ConfigContainer
{
Config *config;
public:
ConfigContainer(Config *);
void showDebug(void);
};
Config.cpp:
#include <iostream>
#include "Config.h"
using namespace std;
void Config::showDebug(void) {
cout << "debug address: " << &debug << " contents: " << debug << endl;
}
ConfigContainer::ConfigContainer(Config *cfg)
{
config= cfg;
}
void ConfigContainer::showDebug(void)
{
config->showDebug();
}
Now when I translate this with SWIG to Python, I get this:
>>> c = ConfigContainer(Config())
>>> c.showDebug()
debug address: 0xabf380 contents: 11586464
>>> c.showDebug()
debug address: 0xabf380 contents: 11067216
When I run this sequence in C++ alone, the contents reported are the same. But with SWIG, even though the address is the same, SWIG appears to corrupt the value inside that address.
The responder says it is because the Config *config member of ConfigContainer has its reference count decremented in Python once showDebug() is called.
How do I tell SWIG to tell Python to leave the config member alone?
When Python creates your Config object, it doesn't hold a permanent reference to it. Your ConfigContainer class just holds a plain pointer, so even in C++, if you don't keep the object alive, the ConfigContainer ends up holding a dangling pointer.
The following line has Python creating a temporary Config object that is destroyed when the line completes:
c = ConfigContainer(Config())
You can see this if you add trace output to the constructors and destructors, like I did below:
>>> import x
>>> c = x.ConfigContainer(x.Config())
__cdecl Config::Config(void)
__cdecl ConfigContainer::ConfigContainer(class Config *)
__cdecl Config::~Config(void)
So now ConfigContainer holds a dangling pointer. The simple solution is to hold on to a reference to the Config until you are done.
>>> import x
>>> c = x.Config()
__cdecl Config::Config(void)
>>> cc = x.ConfigContainer(c)
__cdecl ConfigContainer::ConfigContainer(class Config *)
The complicated solution is to implement reference counting (see 6.25 C++ reference counted objects - ref/unref feature).
Another solution is to use SWIG's support for std::shared_ptr by changing the container:
class ConfigContainer
{
std::shared_ptr<Config> config;
public:
ConfigContainer(std::shared_ptr<Config>&);
~ConfigContainer();
void showDebug(void);
};
The following is required in the interface file:
%include <std_shared_ptr.i>
%shared_ptr(Config) // This instantiates the template for SWIG
Now Config will be reference counted:
>>> import x
>>> cc = x.ConfigContainer(x.Config())
__cdecl Config::Config(void)
__cdecl ConfigContainer::ConfigContainer(class std::shared_ptr<class Config> &)
ConfigContainer now holds a reference to Config. Destroying the container destroys the last reference to Config:
>>> del cc
__cdecl ConfigContainer::~ConfigContainer(void)
__cdecl Config::~Config(void)
But if Python has its own reference only the container is destroyed when Python is done with it:
>>> c = x.Config()
__cdecl Config::Config(void)
>>> cc = x.ConfigContainer(c)
__cdecl ConfigContainer::ConfigContainer(class std::shared_ptr<class Config> &)
>>> del cc
__cdecl ConfigContainer::~ConfigContainer(void)
>>> del c
__cdecl Config::~Config(void)
This looks like a lifetime issue.
If a pointer to the object you pass to a function is going to be stored somewhere, you need to inform the Python runtime by incrementing the reference counter.
If you don't do this, the object may be destroyed after the first call terminates, so your config pointer ends up pointing to memory that Python considers reusable.
What you are observing is that the memory has actually been reused for other objects (so you see the contents changing).
In CPython, object lifetime is managed using reference counters (with a separate collection algorithm to handle reference loops). Each object has a counter of how many other objects point to it, and this counter must be kept correct at every use.
If you pass an object to a function and the function code stores a pointer to that object somewhere, it should also increment the reference counter of the pointed-to object. If it fails to record the new stored reference, the object could be destroyed, leaving a dangling pointer to deallocated (or later reused) memory.
The opposite problem (incrementing the reference count too much) is a memory leak, because the Python memory manager will not reclaim the object's memory while the reference counter is greater than zero, even if no one is using it.
See http://docs.python.org/2/c-api/refcounting.html and related documentation.
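To make the rule concrete, here is a small illustration using the plain CPython C API (not SWIG-specific; the class name is made up) of what storing such a pointer obliges the C++ side to do:
#include <Python.h>

// Holds a C++ pointer whose memory is owned by a Python object. Storing the
// pointer obliges us to also hold a reference to the owning PyObject.
class config_ref
{
    PyObject* owner_; // the Python object that owns the underlying Config
public:
    explicit config_ref(PyObject* owner)
        : owner_(owner)
    {
        Py_XINCREF(owner_); // we are storing it -> take a reference
    }
    ~config_ref()
    {
        Py_XDECREF(owner_); // we are done with it -> release the reference
    }
    // Copying would need its own Py_XINCREF; disabled for this sketch.
    config_ref(const config_ref&) = delete;
    config_ref& operator=(const config_ref&) = delete;
};
The ref/unref and shared_ptr approaches shown in the other answer automate this kind of bookkeeping.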