C++ destructor calling of boost::python wrapped objects - python

Does boost::python provide any guarantee when the C++ destructor of a
wrapped object is called considering the moment of reaching the zero
reference count of the corresponding python object?
I am concerned about a C++ object that opens a file for writing and performs the file closing in its destructor. Is it guaranteed that the file is written when all python references to the object are deleted or out of scope?
I mean:
A=MyBoostPythonObject()
del A # Is the C++ destructor of MyBoostPythonObject called here?
My experience suggests that the destructor is always called at this point but could not find any guarantee for this.

Boost.Python makes the guarantee that if the Python object has ownership of the wrapped C++ object, then when the Python object is deleted, the wrapped C++ object will be deleted. The Python object's lifetime is dictated by Python, wherein when an object’s reference count reaches zero, the object may be immediately destroyed. For non-simplistic cases, such as cyclic references, the objects will be managed by the garbage collector, and may be destroyed before the program exits.
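As a minimal sketch of that guarantee (illustrative only; the file_writer class and the example module name below are stand-ins, not the asker's MyBoostPythonObject), the wrapped C++ destructor runs, and the file is closed, when the last Python reference goes away:
#include <boost/python.hpp>
#include <fstream>
#include <iostream>
#include <string>

// Hypothetical file-writing class standing in for MyBoostPythonObject.
struct file_writer
{
  file_writer()  { out_.open("out.txt"); }
  ~file_writer()
  {
    out_.close(); // File flushed and closed here.
    std::cout << "~file_writer()" << std::endl;
  }

  void write(const std::string& s) { out_ << s; }

  std::ofstream out_;
};

BOOST_PYTHON_MODULE(example)
{
  namespace python = boost::python;
  // The Python object owns the wrapped file_writer, so when its reference
  // count reaches zero (e.g. `del A`), the C++ destructor runs.
  python::class_<file_writer, boost::noncopyable>("MyBoostPythonObject")
    .def("write", &file_writer::write)
    ;
}
In CPython, A = example.MyBoostPythonObject(); A.write("spam"); del A prints ~file_writer() at the del, because the reference count reaches zero at that point.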
One Pythonic solution may be to expose a type that implements the context manager protocol. The context manager protocol consists of a pair of methods: one invoked when entering a runtime context, and one invoked when exiting it. By using a context manager, one can control the scope in which the file is open.
>>> with MyBoostPythonObject() as A: # opens file.
... A.write(...) # file remains open while in scope.
... # file closed once the context's scope is exited.
Here is an example demonstrating exposing a RAII-type class to Python as a context manager:
#include <boost/python.hpp>
#include <iostream>
#include <memory>

// Legacy API.
struct spam
{
  spam(int x)    { std::cout << "spam(): " << x << std::endl; }
  ~spam()        { std::cout << "~spam()" << std::endl; }
  void perform() { std::cout << "spam::perform()" << std::endl; }
};

/// @brief Python context manager for the spam class.
class spam_context_manager
{
public:
  spam_context_manager(int x): x_(x) {}

  void perform() { return impl_->perform(); }

// context manager protocol
public:
  // Use a static member function to get a handle to the self Python
  // object.
  static boost::python::object enter(boost::python::object self)
  {
    namespace python = boost::python;
    spam_context_manager& myself =
      python::extract<spam_context_manager&>(self);

    // Construct the RAII object.
    myself.impl_ = std::make_shared<spam>(myself.x_);

    // Return this object, allowing the caller to invoke other
    // methods exposed on this class.
    return self;
  }

  bool exit(boost::python::object type,
            boost::python::object value,
            boost::python::object traceback)
  {
    // Destroy the RAII object.
    impl_.reset();
    return false; // Do not suppress the exception.
  }
private:
  std::shared_ptr<spam> impl_;
  int x_;
};

BOOST_PYTHON_MODULE(example)
{
  namespace python = boost::python;
  python::class_<spam_context_manager>("Spam", python::init<int>())
    .def("perform", &spam_context_manager::perform)
    .def("__enter__", &spam_context_manager::enter)
    .def("__exit__", &spam_context_manager::exit)
    ;
}
Interactive usage:
>>> import example
>>> with example.Spam(42) as spam:
... spam.perform()
...
spam(): 42
spam::perform()
~spam()

Related

Is it possible in pybind11 to use py::cast to access an abstract base class?

I have included a minimal working example below; it can be compiled using the typical pybind11 instructions (I use CMake).
I have an abstract base class, Abstract, which is pure virtual. I can easily wrap this in pybind11 using a "trampoline class" (this is well documented by pybind11).
Further, I have a concrete implementation of Abstract, ToBeWrapped, that is also wrapped using pybind11.
My issue is that I have some client code which accepts an arbitrary PyObject* (or, in the case of this example, pybind11's wrapper py::object) and expects to cast this to Abstract*.
However, as illustrated in my example, I am unable to cast the py::object to Abstract*.
I have no problem casting to ToBeWrapped* and then storing that as an Abstract*; however, this would require my client code to know ahead of time what kind of Abstract* the Python interpreter is sending, which defeats the purpose of the abstract base class.
TL;DR
Is it possible to modify this code such that the client accessMethod is able to arbitrarily handle an Abstract* passed from the python interpreter?
#include <pybind11/pybind11.h>
#include <iostream>

namespace py = pybind11;

// abstract base class - cannot be instantiated on its own
class Abstract
{
public:
    virtual ~Abstract() = 0;
    virtual std::string print() const = 0;
};
Abstract::~Abstract(){}

// concrete implementation of Abstract
class ToBeWrapped : public Abstract
{
public:
    ToBeWrapped(const std::string& msg = "heh?")
        : myMessage(msg){};

    std::string print() const override
    {
        return myMessage;
    }
private:
    const std::string myMessage;
};

// We need a trampoline class in order to wrap this with pybind11
class AbstractPy : public Abstract
{
public:
    using Abstract::Abstract;

    std::string print() const override
    {
        PYBIND11_OVERLOAD_PURE(
            std::string, // return type
            Abstract,    // parent class
            print,       // name of the function
            // arguments (if any)
        );
    }
};

// I have client code that accepts a raw PyObject* - this client code base implements its
// own python interpreter, and calls this "accessMethod" expecting to convert the python
// object to its c++ type.
//
// Rather than mocking up the raw PyObject* method (which would be trivial) I elected to
// keep this minimal example 100% pybind11
void accessMethod(py::object obj)
{
    // runtime error: py::cast_error
    //Abstract* casted = obj.cast<Abstract*>();

    // this works
    Abstract* casted = obj.cast<ToBeWrapped*>();
}

PYBIND11_MODULE(PyMod, m)
{
    m.doc() = R"pbdoc(
        This is a python module
    )pbdoc";

    py::class_<Abstract, AbstractPy>(m, "Abstract")
        .def("print", &Abstract::print)
        ;

    py::class_<ToBeWrapped>(m, "WrappedClass")
        .def(py::init<const std::string&>())
        ;

    m.def("access", &accessMethod, "This method will attempt to access the wrapped type");
}
You need to declare the hierarchy relationship, so this:
py::class_<ToBeWrapped>(m, "WrappedClass")
should be:
py::class_<ToBeWrapped, Abstract>(m, "WrappedClass")
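With the base class registered, the generic cast in accessMethod succeeds for any registered subclass of Abstract. A sketch of the relevant excerpt (it assumes the rest of the question's code is unchanged):
void accessMethod(py::object obj)
{
    // Succeeds now that pybind11 knows ToBeWrapped derives from Abstract.
    Abstract* casted = obj.cast<Abstract*>();
    std::cout << casted->print() << std::endl;
}

PYBIND11_MODULE(PyMod, m)
{
    py::class_<Abstract, AbstractPy>(m, "Abstract")
        .def("print", &Abstract::print)
        ;

    // Declare ToBeWrapped as a subclass of Abstract.
    py::class_<ToBeWrapped, Abstract>(m, "WrappedClass")
        .def(py::init<const std::string&>())
        ;

    m.def("access", &accessMethod, "This method will attempt to access the wrapped type");
}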

Transfer a c++ object between cython and pybind11 using python capsules

I have two C++ libraries that expose a Python API but use two different frameworks (pybind11 and Cython). I need to transfer an object between them (both ways) using Python capsules. Since Cython and pybind11 use Python capsules in different ways, is it even possible to make it work?
I have library A that defines a class Foo and exposes it with pybind11 to Python. Library B exposes its API using Cython. LibB owns a shared_ptr<Foo>, which is a member of one of LibB's classes, say Bar.
Bar returns the shared_ptr<Foo> member as a PyCapsule which I capture in pybind11 of the Foo class. I'm unpacking the shared_ptr<Foo> from the capsule, returning it to python and the user can operate on this object in python using pybind11 bindings for Foo.
Then I need to put it back in a capsule in pybind11 and return back to Bar.
Bar's python API operates on PyObject and PyCapsule because cython allows that. pybind11 and thus Foo's API does not accept those types and I'm forced to use pybind11::object and pybind11::capsule.
Everything works fine until the moment when I'm trying to use the pybind11::capsule created in pybind11, inside a cython method of class Bar which expects a PyCapsule*.
The shared_ptr<Foo> inside the pybind11::capsule is corrupted and my app crashes.
Has anyone tried to make those 2 libs talk to each other?
libA -> class Foo
namespace foo {
  class Foo {
  public:
    void foo() {...}
  };
}
libB -> class Bar
namespace bar {
  class Bar {
  public:
    PyObject* get_foo() {
      const char* capsule_name = "foo_in_capsule";
      return PyCapsule_New(&m_foo, capsule_name, nullptr);
    }

    static Bar fooToBar(PyObject* capsule) {
      void* foo_ptr = PyCapsule_GetPointer(capsule, "foo_in_capsule");
      auto foo = static_cast<std::shared_ptr<foo::Foo>*>(foo_ptr);

      // here the shared_ptr is corrupted (garbage numbers returned for
      // use_count() and get())
      std::cout << "checking the capsule: " << foo->use_count()
                << " " << foo->get() << std::endl;

      Bar b;
      b.m_foo = *foo; // this is what I would like to get
      return b;
    }

    std::shared_ptr<foo::Foo> m_foo;
  };
}
pybind11 for Foo
void regclass_foo_Foo(py::module m)
{
    py::class_<foo::Foo, std::shared_ptr<foo::Foo>> foo(m, "Foo");
    foo.def("foo", &foo::Foo::foo);

    foo.def_static("from_capsule", [](py::object* capsule) {
        auto* pycapsule_ptr = capsule->ptr();
        auto* foo_ptr = reinterpret_cast<std::shared_ptr<foo::Foo>*>(
            PyCapsule_GetPointer(pycapsule_ptr, "foo_in_capsule"));
        return *foo_ptr;
    });

    foo.def_static("to_capsule", [](std::shared_ptr<foo::Foo>& foo_from_python) {
        auto pybind_capsule = py::capsule(&foo_from_python, "foo_in_capsule", nullptr);
        return pybind_capsule;
    });
}
cython for Bar
cdef extern from "bar.hpp" namespace "bar":
    cdef cppclass Bar:
        object get_foo() except +

def foo_to_bar(capsule):
    b = C.fooToBar(capsule)
    return b
putting it all together in python
from bar import Bar, foo_to_bar
from foo import Foo

bar = Bar(... some arguments ...)
capsule1 = bar.get_foo()
foo_from_capsule = Foo.from_capsule(capsule1)

# this is the important part - need to operate on foo using its python api
print("checking if foo works", foo_from_capsule.foo())

# and use it to create another bar object with a (possibly) modified foo object
capsule2 = Foo.to_capsule(foo_from_capsule)
bar2 = foo_to_bar(capsule2)
There are too many unfinished details in your code for me to even test your PyCapsule version. My view is that the issue is with the lifetime of the shared pointers: your capsule points to a shared pointer whose lifetime is tied to the Bar it lives in. However, the capsule may outlive that Bar. You should probably be creating a new shared_ptr<Foo>* (with new), pointing to that in your capsule, and defining a destructor (for the capsule) to delete it.
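A rough sketch of that fix (untested; it reuses the question's foo::Foo, the m_foo member, and the capsule name, and would replace the question's Bar::get_foo):
PyObject* bar::Bar::get_foo()
{
    // Heap-allocate a copy of the shared_ptr so the capsule owns its own
    // reference, independent of this Bar's lifetime.
    auto* holder = new std::shared_ptr<foo::Foo>(m_foo);
    return PyCapsule_New(holder, "foo_in_capsule",
        +[](PyObject* capsule)
        {
            // Capsule destructor: release the extra reference.
            delete static_cast<std::shared_ptr<foo::Foo>*>(
                PyCapsule_GetPointer(capsule, "foo_in_capsule"));
        });
}
The consuming side (fooToBar / from_capsule) would then copy the shared_ptr out of the capsule before the capsule itself is destroyed.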
An outline of an alternative approach that I think should work better is as follows:
Write your classes purely in terms of C++ types, so get_foo and foo_to_bar just take/return shared_ptr<Foo>.
Define PyBar as a proper Cython class, rather than using capsules:
cdef public class PyBar [object PyBarStruct, type PyBarType]:
    cdef shared_ptr[Bar] ptr

cdef public PyBar PyBar_from_shared_ptr(shared_ptr[Bar] b):
    cdef PyBar x = PyBar()
    x.ptr = b
    return x
This generates a header file containing definitions of PyBarStruct and PyBarType (you probably don't need the latter). I also define a basic module-level function to create a PyBar from a shared pointer (and make that public as well, so it appears in the header too).
Then use PyBind11 to define a custom type-caster to/from shared_ptr<Bar>. load would be something like:
bool load(handle src, bool) {
    auto bar_mod = py::module::import("bar");
    auto bar_type = py::getattr(bar_mod, "PyBar");
    if (!py::isinstance(src, bar_type)) {
        return false;
    }
    // now cast to my PyBarStruct
    auto ptr = reinterpret_cast<PyBarStruct*>(src.ptr());
    value = ptr->ptr; // access the shared_ptr of the struct
    return true;      // tell pybind11 the conversion succeeded
}
while the C++ to Python caster would be something like
static handle cast(std::shared_ptr<Bar> src, return_value_policy /* policy */, handle /* parent */) {
    py::module::import("bar"); // See note...
    // PyBar_from_shared_ptr returns a PyBarStruct*, which begins with
    // PyObject_HEAD, so it can be handed back to Python as a PyObject*.
    return reinterpret_cast<PyObject*>(PyBar_from_shared_ptr(src));
}
I've made sure to call py::module::import("bar") in both functions because I don't think it's safe to use the Cython-defined functions until the module has been imported somewhere, and importing it in the casters ensures that.
This code is untested so almost certainly has errors, but should give a cleaner approach than PyCapsule.

How to call Python from a boost thread?

I have a Python app that calls a C++ Boost.Python library and it all works. However, I have a C++-to-Python callback scenario where C++ calls Python from a boost thread, and I get an access violation on the C++ side. If I make exactly the same callback from the Python thread, it works perfectly. Therefore I suspect that I cannot simply call back into Python from C++ using a boost thread, but need to do something extra for it to work?
The most likely culprit is that the Global Interpreter Lock (GIL) is not being held by the thread when it invokes Python code, resulting in undefined behavior. Verify that all paths that make direct or indirect Python calls acquire the GIL before invoking Python code.
The GIL is a mutex around the CPython interpreter. This mutex prevents parallel operations from being performed on Python objects. Thus, at any point in time, at most one thread, the one that has acquired the GIL, is allowed to perform operations on Python objects. When multiple threads are present, invoking Python code whilst not holding the GIL results in undefined behavior.
C or C++ threads are sometimes referred to as alien threads in the Python documentation. The Python interpreter has no ability to control the alien thread. Therefore, alien threads are responsible for managing the GIL to permit concurrent or parallel execution with Python threads. One must meticulously consider:
The stack unwinding, as Boost.Python may throw an exception.
Indirect calls to Python, such as copy-constructors or destructors
One solution is to wrap Python callbacks with a custom type that is aware of GIL management.
Using a RAII-style class to manage the GIL provides an elegant exception-safe solution. For example, with the following with_gil class, when a with_gil object is created, the calling thread acquires the GIL. When the with_gil object is destructed, it restores the GIL state.
/// @brief Guard that will acquire the GIL upon construction, and
///        restore its state upon destruction.
class with_gil
{
public:
  with_gil()  { state_ = PyGILState_Ensure(); }
  ~with_gil() { PyGILState_Release(state_); }

  with_gil(const with_gil&)            = delete;
  with_gil& operator=(const with_gil&) = delete;
private:
  PyGILState_STATE state_;
};
And its usage:
{
  with_gil gil; // Acquire GIL.
  // perform Python calls, may throw
} // Restore GIL.
Now that the GIL can be managed via with_gil, the next step is to create a functor that properly manages the GIL. The following py_callable class will wrap a boost::python::object and acquire the GIL for all paths in which Python code is invoked:
/// @brief Helper type that will manage the GIL for a python callback.
///
/// @detail GIL management:
///         * Acquire the GIL when copying the `boost::python` object
///         * The newly constructed `python::object` will be managed
///           by a `shared_ptr`. Thus, it may be copied without owning
///           the GIL. However, a custom deleter will acquire the
///           GIL during deletion
///         * When `py_callable` is invoked (operator()), it will acquire
///           the GIL then delegate to the managed `python::object`
class py_callable
{
public:
  /// @brief Constructor that assumes the caller has the GIL locked.
  py_callable(const boost::python::object& object)
  {
    with_gil gil;
    object_.reset(
      // GIL locked, so it is safe to copy.
      new boost::python::object{object},
      // Use a custom deleter to hold the GIL when the object is deleted.
      [](boost::python::object* object)
      {
        with_gil gil;
        delete object;
      });
  }

  // Use default copy-constructor and assignment-operator.
  py_callable(const py_callable&) = default;
  py_callable& operator=(const py_callable&) = default;

  template <typename ...Args>
  void operator()(Args... args)
  {
    // Lock the GIL as the python object is going to be invoked.
    with_gil gil;
    (*object_)(std::forward<Args>(args)...);
  }
private:
  std::shared_ptr<boost::python::object> object_;
};
By managing the boost::python::object on the free store, one can freely copy the shared_ptr without having to hold the GIL. This allows us to safely use the default generated copy-constructor, assignment operator, destructor, etc.
One would use the py_callable as follows:
// thread 1
boost::python::object object = ...; // GIL must be held.
py_callable callback(object); // GIL no longer required.
work_queue.post(callback);
// thread 2
auto callback = work_queue.pop(); // GIL not required.
// Invoke the callback. If callback is `py_callable`, then it will
// acquire the GIL, invoke the wrapped `object`, then release the GIL.
callback(...);
Here is a complete example demonstrating having a Python extension invoke a Python object as a callback from a C++ thread:
#include <chrono>  // std::chrono
#include <memory>  // std::shared_ptr
#include <thread>  // std::this_thread, std::thread
#include <utility> // std::forward

#include <boost/python.hpp>

/// @brief Guard that will acquire the GIL upon construction, and
///        restore its state upon destruction.
class with_gil
{
public:
  with_gil()  { state_ = PyGILState_Ensure(); }
  ~with_gil() { PyGILState_Release(state_); }

  with_gil(const with_gil&)            = delete;
  with_gil& operator=(const with_gil&) = delete;
private:
  PyGILState_STATE state_;
};

/// @brief Helper type that will manage the GIL for a python callback.
///
/// @detail GIL management:
///         * Acquire the GIL when copying the `boost::python` object
///         * The newly constructed `python::object` will be managed
///           by a `shared_ptr`. Thus, it may be copied without owning
///           the GIL. However, a custom deleter will acquire the
///           GIL during deletion
///         * When `py_callable` is invoked (operator()), it will acquire
///           the GIL then delegate to the managed `python::object`
class py_callable
{
public:
  /// @brief Constructor that assumes the caller has the GIL locked.
  py_callable(const boost::python::object& object)
  {
    with_gil gil;
    object_.reset(
      // GIL locked, so it is safe to copy.
      new boost::python::object{object},
      // Use a custom deleter to hold the GIL when the object is deleted.
      [](boost::python::object* object)
      {
        with_gil gil;
        delete object;
      });
  }

  // Use default copy-constructor and assignment-operator.
  py_callable(const py_callable&) = default;
  py_callable& operator=(const py_callable&) = default;

  template <typename ...Args>
  void operator()(Args... args)
  {
    // Lock the GIL as the python object is going to be invoked.
    with_gil gil;
    (*object_)(std::forward<Args>(args)...);
  }
private:
  std::shared_ptr<boost::python::object> object_;
};

BOOST_PYTHON_MODULE(example)
{
  // Force the GIL to be created and initialized. The current caller will
  // own the GIL.
  PyEval_InitThreads();

  namespace python = boost::python;
  python::def("call_later",
    +[](int delay, python::object object) {
      // Create a thread that will invoke the callback.
      std::thread thread(+[](int delay, py_callable callback) {
        std::this_thread::sleep_for(std::chrono::seconds(delay));
        callback("spam");
      }, delay, py_callable{object});
      // Detach from the thread, allowing the caller to return.
      thread.detach();
    });
}
Interactive usage:
>>> import time
>>> import example
>>> def shout(message):
... print message.upper()
...
>>> example.call_later(1, shout)
>>> print "sleeping"; time.sleep(3); print "done sleeping"
sleeping
SPAM
done sleeping

Set a python variable to a C++ object pointer with boost-python

I want to set a Python variable from C++ so that the C++ program can create an object (Game* game = new Game();) and the Python code can reference that instance (and call functions on it, etc.). How can I achieve this?
I feel like I have some core misunderstanding of the way Python or Boost-Python works.
The line main_module.attr("game") = game is in a try catch statement, and the error (using PyErr_Fetch) is "No to_python (by-value) converter found for C++ type: class Game".
E.g.
class_<Game>("Game")
.def("add", &Game::add)
;
object main_module = import("__main__");
Game* game = new Game();
main_module.attr("game") = game; //This does not work
From Python:
import testmodule
testmodule.game.foo(7)
When dealing with language bindings, one often has to be pedantic about the details. By default, when a C++ object crosses the language boundary, Boost.Python will create a copy, as this is the safest course of action to prevent dangling references. If a copy should not be made, then one needs to be explicit as to the ownership of the C++ object:
To pass a reference to a C++ object to Python while maintaining ownership in C++, use boost::python::ptr() or boost::ref(). The C++ code should guarantee that the C++ object's lifetime is at least as long as the Python object. When using ptr(), if the pointer is null, then the resulting Python object will be None.
To transfer ownership of a C++ object to Python, one can apply the manage_new_object ResultConverterGenerator, allowing ownership to be transferred to Python (a sketch follows this list). C++ code should not attempt to access the pointer once the Python object's lifetime ends.
For shared ownership, one would need to expose the class with a HeldType of a smart pointer supporting shared semantics, such as boost::shared_ptr.
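For instance, the manage_new_object policy from the second point can be applied as a call policy when exposing a factory function (a sketch using the question's Game class; make_game is a made-up name, not part of the question):
// Inside BOOST_PYTHON_MODULE: Python takes ownership of the returned Game*.
boost::python::def("make_game",
    +[]() { return new Game(); },
    boost::python::return_value_policy<boost::python::manage_new_object>());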
Once the Python object has been created, it would need to be inserted into a Python namespace to be generally accessible:
From within the module definition, use boost::python::scope to obtain a handle to the current scope. For example, the following would insert x into the example module:
BOOST_PYTHON_MODULE(example)
{
boost::python::scope().attr("x") = ...; // example.x
}
To insert into the __main__ module, one can import __main__. For example, the following would insert x into the __main__ module:
boost::python::import("__main__").attr("x") = ...;
Here is an example demonstrating how to directly construct the Python object from C++, transfer ownership of a C++ object to Python, and construct a Python object that references a C++ object:
#include <iostream>
#include <memory> // std::unique_ptr
#include <boost/python.hpp>

// Mockup model.
struct spam
{
  spam(int id)
    : id_(id)
  {
    std::cout << "spam(" << id_ << "): " << this << std::endl;
  }

  ~spam()
  {
    std::cout << "~spam(" << id_ << "): " << this << std::endl;
  }

  // Explicitly disable copying.
  spam(const spam&) = delete;
  spam& operator=(const spam&) = delete;

  int id_;
};

/// @brief Transfer ownership to a Python object. If the transfer fails,
///        then the object will be destroyed and an exception is thrown.
template <typename T>
boost::python::object transfer_to_python(T* t)
{
  // Transfer ownership to a smart pointer, allowing for proper cleanup
  // in case Boost.Python throws.
  std::unique_ptr<T> ptr(t);

  // Use the manage_new_object generator to transfer ownership to Python.
  namespace python = boost::python;
  typename python::manage_new_object::apply<T*>::type converter;

  // Transfer ownership to the Python handler and release ownership
  // from C++.
  python::handle<> handle(converter(*ptr));
  ptr.release();

  return python::object(handle);
}

namespace {
  spam* global_spam;
} // namespace

BOOST_PYTHON_MODULE(example)
{
  namespace python = boost::python;

  // Expose spam.
  auto py_spam_type = python::class_<spam, boost::noncopyable>(
      "Spam", python::init<int>())
    .def_readonly("id", &spam::id_)
    ;

  // Directly create an instance of Python Spam and insert it into this
  // module's namespace.
  python::scope().attr("spam1") = py_spam_type(1);

  // Construct an instance of Python Spam from C++ spam, transferring
  // ownership to Python. The Python Spam instance will be inserted into
  // this module's namespace.
  python::scope().attr("spam2") = transfer_to_python(new spam(2));

  // Construct an instance of Python Spam from C++, but retain ownership of
  // spam in C++. The Python Spam instance will be inserted into the
  // __main__ scope.
  global_spam = new spam(3);
  python::import("__main__").attr("spam3") = python::ptr(global_spam);
}
Interactive usage:
>>> import example
spam(1): 0x1884d40
spam(2): 0x1831750
spam(3): 0x183bd00
>>> assert(1 == example.spam1.id)
>>> assert(2 == example.spam2.id)
>>> assert(3 == spam3.id)
~spam(1): 0x1884d40
~spam(2): 0x1831750
In the example usage, note how Python did not destroy spam(3) upon exit, as it was not granted ownership of the underlying object.
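For completeness, the shared-ownership option mentioned above is not demonstrated in this example. A rough sketch (reusing the spam struct from above; example_shared and spam4 are made-up names) would look like:
#include <boost/python.hpp>
#include <boost/shared_ptr.hpp>

namespace {
  boost::shared_ptr<spam> global_shared_spam;
} // namespace

BOOST_PYTHON_MODULE(example_shared)
{
  namespace python = boost::python;

  // Expose spam with a shared_ptr HeldType so that C++ and Python share
  // ownership of the same instance.
  python::class_<spam, boost::shared_ptr<spam>, boost::noncopyable>(
      "Spam", python::init<int>())
    .def_readonly("id", &spam::id_)
    ;

  // Both sides keep the object alive; it is destroyed only when the last
  // owner on either side releases it.
  global_shared_spam.reset(new spam(4));
  python::scope().attr("spam4") = global_shared_spam;
}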

Wrapping Enums using Boost-Python

I have a problem wrapping an enum for Python using Boost.Python.
Initially I intended to do something like the following in the try-catch statement (I've inserted my whole code below):
main_namespace["Motion"] = enum_<TestClass::Motion>("Motion")
.value("walk", TestClass::walk)
.value("bike", TestClass::bike)
;
Everything was fine and the code compiled. At run time I got this error (which made no sense to me):
AttributeError: 'NoneType' object has no attribute 'Motion'
Afterwards I decided to write a Python module using BOOST_PYTHON_MODULE in my code.
After initializing the Python interpreter I wanted to use this module right away, but didn't know how. The following is my whole code:
#include <boost/python.hpp>
#include <iostream>

using namespace std;
using namespace boost::python;

BOOST_PYTHON_MODULE(test)
{
    enum_<TestClass::Motion>("Motion")
        .value("walk", TestClass::walk)
        .value("bike", TestClass::bike)
        ;
}

int main()
{
    Py_Initialize();
    try
    {
        object pyMainModule = import("__main__");
        object main_namespace = pyMainModule.attr("__dict__");

        //What previously I intended to do
        //main_namespace["Motion"] = enum_<TestClass::Motion>("Motion")
        //    .value("walk", TestClass::walk)
        //    .value("bike", TestClass::bike)
        //;

        //I want to use my enum here
        //I need something like line below which makes me able to use the enum!
        exec("print 'hello world'", main_namespace, main_namespace);
    }
    catch(error_already_set const&)
    {
        PyErr_Print();
    }
    Py_Finalize();
    return 0;
}
Anything useful to know about wrapping and using Enums in Python will be appreciated!
Thanks in advance
The AttributeError is the result of trying to create a Python extension type without first setting scope. The boost::python::enum_ constructor states:
Constructs an enum_ object holding a Python extension type derived from int which is named name. The named attribute of the current scope is bound to the new extension type.
When embedding Python, to use a custom Python module, it is often easiest to use PyImport_AppendInittab, then import the module by name.
PyImport_AppendInittab("example", &initexample);
...
boost::python::object example = boost::python::import("example");
Here is a complete example showing two enumerations being exposed through Boost.Python. One is included in a separate module (example) that is imported by main, and the other is exposed directly in main.
#include <iostream>
#include <boost/python.hpp>

/// @brief Mockup class with a nested enum.
struct TestClass
{
  /// @brief Mocked enum.
  enum Motion
  {
    walk,
    bike
  };

  /// @brief Mocked enum.
  enum Color
  {
    red,
    blue
  };
};

/// @brief Python example module.
BOOST_PYTHON_MODULE(example)
{
  namespace python = boost::python;
  python::enum_<TestClass::Motion>("Motion")
    .value("walk", TestClass::walk)
    .value("bike", TestClass::bike)
    ;
}

int main()
{
  PyImport_AppendInittab("example", &initexample); // Add example to built-in.
  Py_Initialize(); // Start interpreter.

  // Create the __main__ module.
  namespace python = boost::python;
  try
  {
    python::object main = python::import("__main__");
    python::object main_namespace = main.attr("__dict__");
    python::scope scope(main); // Force main scope.

    // Expose TestClass::Color as Color.
    python::enum_<TestClass::Color>("Color")
      .value("red", TestClass::red)
      .value("blue", TestClass::blue)
      ;

    // Print values of the Color enumeration.
    python::exec(
      "print Color.values",
      main_namespace, main_namespace);

    // Get a handle to the Color enumeration.
    python::object color = main_namespace["Color"];
    python::object blue = color.attr("blue");
    if (TestClass::blue == python::extract<TestClass::Color>(blue))
      std::cout << "blue enum values matched." << std::endl;

    // Import the example module into the main namespace.
    main_namespace["example"] = python::import("example");

    // Print the values of the Motion enumeration.
    python::exec(
      "print example.Motion.values",
      main_namespace, main_namespace);

    // Check if the Python enums match the C++ enum values.
    if (TestClass::bike == python::extract<TestClass::Motion>(
        main_namespace["example"].attr("Motion").attr("bike")))
      std::cout << "bike enum values matched." << std::endl;
  }
  catch (const python::error_already_set&)
  {
    PyErr_Print();
  }
}
Output:
{0: __main__.Color.red, 1: __main__.Color.blue}
blue enum values matched.
{0: example.Motion.walk, 1: example.Motion.bike}
bike enum values matched.
