I'm using some example code I found on the web to try to set up Boost.Python so I can call C++ routines from Python. (I intend to write my UI in Python and my backend in C++ for this application.) Boost seems simple enough to use, but it's currently not exposing any functionality.
#include <boost/python.hpp>
char const* greet()
{
    return "hello, world";
}

BOOST_PYTHON_MODULE(hello_ext)
{
    using namespace boost::python;
    def("greet", greet);
}
I compile this using the line g++ -c hello.cpp -I/usr/include/python3.6/
(that include flag is necessary because I'm on Ubuntu, where g++ doesn't locate the Python headers on its own, and I'm too lazy to add them to my include path)
import hello_ext
print(hello_ext.greet())
I run this using python3, and I get the following output
File "hello.py", line 1, in <module>
import hello_ext
ModuleNotFoundError: No module named 'hello_ext'
This implies to me that Boost is not properly exposing the C++ functionality I created a module for. What am I missing here? I've already tried exposing the functionality to Python in a header file instead of in the .cpp file, and that has the same result.
Also, if anyone looking at this post is having issues accessing functionality within their module, but it seems like the module is being exposed, make sure python doesn't already have a default module with the same name which would take precedence over your module.
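A quick way to check for that kind of name clash is to ask Python where an import would resolve from before trusting it. This sketch uses a stdlib module name for illustration; substitute your own module name (e.g. hello_ext):

```python
# Print where a given module name would be loaded from, without importing it.
import importlib.util

spec = importlib.util.find_spec("json")  # substitute your module name here
if spec is None:
    print("module not found on sys.path")
else:
    # If this path is not your freshly built .so, something else is
    # shadowing your module.
    print(spec.origin)
```

If the printed path points somewhere unexpected, that file takes precedence over your extension.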
According to the Boost.Python docs, your code must be compiled as a shared library in order to be imported from Python:
g++ hello.cpp -I /usr/include/python3.6 -lboost_python-py36 -shared -fPIC -o hello_ext.so
Note that the name of the shared library must match the name of your Python module. You also forgot to link your code against the Boost.Python library.
Related
I'm on macOS and I'm trying to embed Python inside a C shared library. Disclaimer: there are two installations of Python involved in this question. One is my "main" Python, installed at /Library/Frameworks/Python.framework/Versions/3.10/bin/python3, referred to below as "main python"; the other lives in a subdirectory of my current working directory containing Python source code that I build and embed, referred to as "embedded python".
For clarity, I have reproduced a minimalistic example of my problem that gives the same results.
Consider that the current working directory contains:
python310 (a directory containing python source code)
I configure it using the following command:
./configure --prefix=__path_to_the_directory_containing_python_source__ --exec-prefix=__path_to_the_directory_containing_python_source__ --enable-shared --enable-optimizations --with-lto
and compiled it using make && make altinstall
I used altinstall because this distribution will only be used inside my application and I didn't want to replace my normal Python installation.
test_interpreter.c:
#include "__path_to_the_directory_containing_python_source__/Include/Python.h"
int test() {
    Py_SetProgramName(L"__path_to_the_directory_containing_python_source__/python.exe");
    Py_Initialize();

    // adds the current directory to sys.path to make test_import.py importable
    PyObject* sysPath = PySys_GetObject("path"); // borrowed reference
    PyObject* cur_dir = PyUnicode_FromString(""); // new reference
    PyList_Append(sysPath, cur_dir);

    PyRun_SimpleString("import test_import");

    Py_DECREF(cur_dir);
    Py_Finalize();
    return 0;
}
test_import.py:
# will be called by embedded python
print("Inside python")
...
test_interpreter.py
# will be called by main python
import ctypes
lib = ctypes.CDLL("/__path_to_current_directory__/libtest.so")
tp = ctypes.CFUNCTYPE(ctypes.c_int)
test = tp(("test", lib))
test()
Now the problem is that when test_import.py imports some built-in modules (note it doesn't happen with every module, just some), I get errors such as a segfault (e.g. when importing ctypes) or an abort trap (e.g. when importing sqlite3). The most interesting part is that the errors do not occur if the interpreter is embedded inside an executable rather than a shared library.
So, if I compile test_interpreter.c into libtest.so using: gcc -L__path_to_the_directory_containing_python_source__ -lpython3.10 -shared -install_name __current_directory_path__/libtest.so -o libtest.so test_interpreter.c,
then modify test_import.py to, for example:
# will be called by embedded python
print("Inside python")
import decimal
and execute python3 test_interpreter.py (using main python; the version is still 3.10), I get:
Inside python
Segmentation fault: 11
Other modules that give me the same error message are:
tkinter
ctypes
Also, if it can be useful: I managed to work out that when importing ctypes inside the embedded interpreter, the error occurs when the line from _ctypes import Union, Structure, Array (in ctypes' __init__.py) is executed.
If I modify test_import.py to:
print("Inside python")
import sqlite3
and run the same command, I get:
Inside python
Python(86563,0x10a7105c0) malloc: *** error for object 0x101d56360: pointer being freed was not allocated
Python(86563,0x10a7105c0) malloc: *** set a breakpoint in malloc_error_break to debug
Abort trap: 6
Other modules that give me the same error message are:
dbm
distutils
random
json
zoneinfo
base64
csv
calendar
pickle
Note that if I compile test_interpreter.c as an executable (after renaming the test function to main) using gcc -L__path_to_the_directory_containing_python_source__ -o test test_interpreter.c, or if I run the Python executable directly (without embedding it) and try to import those modules, everything works fine.
Thanks in advance to everyone who tries to work out what's going on.
I'm trying to use pybind11 for the first time by following the tutorial documentation.
Following the documentation, I created an example.cpp file with this content:
#include <pybind11/pybind11.h>
int add(int i, int j) {
    return i + j;
}

PYBIND11_MODULE(example, m) {
    m.doc() = "pybind11 example plugin"; // optional module docstring
    m.def("add", &add, "A function which adds two numbers");
}
And since I'm on macOS, based on this tutorial, I compiled the example file with the following command (not sure if this is correct, but it seems to work):
c++ -O3 -Wall -shared -std=c++11 -undefined dynamic_lookup -I /usr/local/Cellar/pybind11/2.4.3/include example.cpp -o example`python3-config --extension-suffix --includes`
This command generates the example.cpython-37m-darwin.so file in the same directory.
And then I created example.py with the following content:
import example
example.add(1, 2)
But when I run the script with python example.py, it shows the following error:
Traceback (most recent call last):
File "example.py", line 1, in <module>
import example
File "/Users/cuinjune/pybind11/build/example.py", line 2, in <module>
example.add(1, 2)
AttributeError: 'module' object has no attribute 'add'
What is wrong and how to fix this?
EDIT: Previously, it somehow generated example.pyc file but I don't know how the file was generated and now it just says: ImportError: No module named example.
This line:
/Users/cuinjune/pybind11/build/example.py
says that there is (or was) another file in your working directory (or perhaps /Users/cuinjune/pybind11/build is in your PYTHONPATH) that is a pure Python file, unrelated to your built example.cpython-37m-darwin.so. When that example.py was imported, presumably some time in the past, the example.pyc you saw was generated. The .pyc remains fully functional standalone, even after the originating example.py is removed: it has the full path to the original file hardcoded in it, and will continue to refer to that file even once it is gone.
From your update, now all other example.py, .pyc, .pyo are gone, right?
First make sure that the python interpreter you are running matches the tag that you used to build. For example by running:
python3 -c 'import distutils.sysconfig as ds; print(ds.get_config_var("EXT_SUFFIX"))'
(where I'm assuming you used python3 as the command to start the Python interpreter; if not, then change it to the relevant command).
Second, check the PYTHONPATH environment variable to see whether the local directory (either fully spelled out, or .) is in it. If not, add it.
With those two covered, it should work ...
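As a sketch of that first check, the expected filename suffix can also be read via the sysconfig module and compared with the file the compiler produced (the module name example here mirrors the question):

```python
# The interpreter only imports extension modules whose filename ends in this
# suffix (e.g. ".cpython-37m-darwin.so"); a mismatch means the module was
# built for a different interpreter than the one importing it.
import sysconfig

suffix = sysconfig.get_config_var("EXT_SUFFIX")
print("expected extension filename:", "example" + suffix)
```

If the printed name differs from the .so the build produced, rebuild with the matching interpreter's python3-config.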
Personally, however, I'd first simplify my life by renaming the module to something more unique (rename both example in the .cpp as well as the target on the CLI).
I'm using a 3rd-party vendor who provides a Windows driver (DLL) and C header files. What I'm trying to do is use SWIG to compile the header file into a Python module.
Here are my files:
- BTICard.i
- BTICard.h
- BTICARD64.dll
- BTICARD64.lib
SWIG interface source
%module BTICard
%include <windows.i>
%{
#define SWIG_FILE_WITH_INIT
#include "BTICard.H"
#define BTICardAPI
%}
In Cygwin, I used the following commands:
swig -python -py3 BTICard.i
Which then generated the following files:
- BTICard.py
- BTICard_wrap.c
In Cygwin, compile for Python Module
gcc -c -fpic BTICARD.H BTICard_wrap.c -I/usr/include/python3.8
Which now allows BTICard to be imported in Python
import BTICard
import ctypes
BTICarddll = ctypes.WinDLL('BTICARD64')
pRec1553 = SEQRECORD1553() # Doesn't initialize
The BTICard.H contains the following:
typedef struct - Used to initialize various field structures
enum - Constant Declarations
According to the SWIG documentation, the typedef structs are supposed to be converted into Python classes. When I tried initializing the class, I got a NameError. I suspect the issue is that my interface file doesn't recognize these types, so it failed to convert them.
Upon further investigation, I tried the distutils approach and created this setup.py:
#!/usr/bin/env python3.8
"""
setup.py file for SWIG
"""
from distutils.core import setup, Extension
example_module = Extension('_BTICard',
                           sources=['BTICard_wrap.c', 'BTICard.h'],)

setup(name='BTICard',
      version='0.1',
      author="TESTER",
      description="""BTICard API""",
      ext_modules=[example_module],
      py_modules=["BTICard"],
      )
To build the package:
$ python3.8 setup.py build_ext --inplace
running build_ext
building '_BTICard' extension
error: unknown file type '.h' (from 'BTICard.h')
What's the issue here?
Is there a way I can access the Python source file after creating the object from gcc?
All I'm trying to do is validate a separate Python wrapper that seems to have issues (that's a completely separate topic). Is there another way to create this Python module?
The .i file isn't including the interface to export. It should look like:
%module BTICard
%{
#include "BTICard.H" // this just makes the interface available to the wrapper.
%}
%include <windows.i>
%include "BTICard.h" // This wraps the interface defined in the header.
setup.py knows about SWIG interfaces, so list the .i file directly as a source. Headers are included by the sources and aren't listed as sources themselves. You may need other options, but this should get you on the right track. You'll likely also need the DLL's export library (BTICard.lib) and need to link against that as well:
example_module = Extension('_BTICard',
                           sources=['BTICard.i'],
                           libraries=['BTICard'])  # distutils appends the .lib suffix itself
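For reference, a fuller setup.py along those lines might look like this. This is only a sketch: the swig_opts value mirrors the -py3 flag used on the command line earlier, and the BTICARD64 library name is an assumption based on the files listed in the question.

```python
from distutils.core import setup, Extension

# The .i file is listed as a source, so distutils invokes SWIG itself and
# then compiles the generated wrapper; headers are pulled in via #include
# and are not listed as sources.
example_module = Extension(
    '_BTICard',
    sources=['BTICard.i'],
    swig_opts=['-py3'],        # mirror the flags used on the swig command line
    libraries=['BTICARD64'],   # links against BTICARD64.lib (no extension here)
)

setup(
    name='BTICard',
    version='0.1',
    description='BTICard API',
    ext_modules=[example_module],
    py_modules=['BTICard'],
)
```

Running python3.8 setup.py build_ext --inplace should then regenerate the wrapper and build _BTICard in one step.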
I'm trying to embed Cython code in C, following chapter 8 of the O'Reilly Cython book. I found this paragraph in Cython's documentation but still don't know what I should do:
If the C code wanting to use these functions is part of more than one shared library or executable, then import_modulename() function needs to be called in each of the shared libraries which use these functions. If you crash with a segmentation fault (SIGSEGV on linux) when calling into one of these api calls, this is likely an indication that the shared library which contains the api call which is generating the segmentation fault does not call the import_modulename() function before the api call which crashes.
I'm running Python 3.4, Cython 0.23 and GCC 5 on OS X. The source files are transcendentals.pyx and main.c:
main.c
#include "transcendentals_api.h"
#include <math.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    Py_SetPythonHome(L"/Users/spacegoing/anaconda");
    Py_Initialize();
    import_transcendentals();
    printf("pi**e: %f\n", pow(get_pi(), get_e()));
    Py_Finalize();
    return 0;
}
transcendentals.pyx
cdef api double get_pi():
    return 3.1415926

cdef api double get_e():
    print("calling get_e()")
    return 2.718281828
I'm compiling those files using setup.py and Makefile:
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
setup(
    ext_modules=cythonize([
        Extension("transcendentals", ["transcendentals.pyx"])
    ])
)
Makefile
python-config=/Users/spacegoing/anaconda/bin/python3-config
ldflags:=$(shell $(python-config) --ldflags)
cflags:=$(shell $(python-config) --cflags)
a.out: main.c transcendentals.so
	gcc-5 $(cflags) $(ldflags) transcendentals.c main.c

transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
	cython transcendentals.pyx

clean:
	rm -r a.out a.out.dSYM build transcendentals.[ch] transcendentals.so transcendentals_api.h
However, I get the error Segmentation fault: 11. Can anyone help with this? Thanks!
In that Makefile there is
transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
Unless python refers to /Users/spacegoing/anaconda/bin/python3, it should be replaced, since the module may otherwise be compiled for the wrong Python version and thus cannot be loaded.
In main.c there is a call to import_transcendentals() that does not check the return value, i.e. whether the import fails or succeeds. In case of failure, get_pi() and get_e() point to invalid memory locations and calling them causes a segmentation fault.
Also, the module has to be located somewhere where it can be found. It seems that when embedding, the current directory is not searched for python modules. PYTHONPATH environment variable could be changed to include the directory where transcendentals.so is located.
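As an illustration of that PYTHONPATH workaround, here is a small Python sketch that prepends the directory containing transcendentals.so to the environment of a child process. The ./a.out name is the hypothetical embedding executable from the Makefile above:

```python
# Sketch: make the embedded interpreter find modules in the current directory
# by extending PYTHONPATH for the child process that embeds Python.
import os
import subprocess

env = dict(os.environ)
env["PYTHONPATH"] = os.getcwd() + os.pathsep + env.get("PYTHONPATH", "")

# subprocess.run(["./a.out"], env=env)  # hypothetical: launch the embedding program

# Sanity check: the current directory is now first on PYTHONPATH.
print(env["PYTHONPATH"].split(os.pathsep)[0] == os.getcwd())
```

Exporting PYTHONPATH in the shell before running ./a.out achieves the same thing.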
The following is an alternative way of embedding the code in the C program, and it sidesteps the import issues since the module code is linked into the executable.
Essentially, a call to PyInit_transcendentals() is missing.
A file transcendentals.h will be generated when the Cython functions are declared public, i.e.
cdef public api double get_pi():
...
cdef public api double get_e():
Your main.c should have the include directives
#include <Python.h>
#include "transcendentals.h"
and then in main
Py_Initialize();
PyInit_transcendentals();
There should be no #include "transcendentals_api.h" and no import_transcendentals()
The first reason is that according to the documentation
However, note that you should include either modulename.h or
modulename_api.h in a given C file, not both, otherwise you may get
conflicting dual definitions.
The second reason is that, since transcendentals.c is linked into the program in
gcc $(cflags) $(ldflags) transcendentals.c main.c
there is no reason to import the transcendentals module. The module still has to be initialized, though; PyInit_transcendentals() does that for Python 3.
As my title kind of says, I'm trying to develop a C extension for Python. I followed this tutorial here and ran the setup.py script. However, when I run the Python interpreter and try to import my newly created module, I get a linker error: undefined symbol: py_BuildValue. I also tried to compile it myself and got the same error, plus an error saying Py_InitModule3 is undefined. I have installed both python3.2-dev and python3-dev. Here is my test.c code:
#include <python3.2/Python.h>
static PyObject* Test(PyObject* self){
    return py_BuildValue("s","This is a test and my first trip into the world of python bindings!!");
}

static PyMethodDef test_funcs[] = {
    {"testExtensions", (PyCFunction)Test, METH_NOARGS, "This is my First Extension!!"},
    {NULL}
};

void initTest(void){
    Py_InitModule3("Test", test_funcs, "Extension module example!");
}
And my setup.py code:
from distutils.core import setup, Extension
setup(name='Test', version='1.0',
      ext_modules=[Extension('Test', ['test.c'])])
That's because the function is called Py_BuildValue, not py_BuildValue; C is case-sensitive. If you check further up in your compile messages, you probably also have a warning about the function being implicitly declared.