I am trying to set up a CMake project that creates Python bindings for its C++ functions using pybind11 on Ubuntu.
The directory structure is:
pybind_test
arithmetic.cpp
arithmetic.h
bindings.h
CMakeLists.txt
main.cpp
pybind11 (clone of the GitHub repo: https://github.com/pybind/pybind11)
The CMakeLists.txt file:
cmake_minimum_required(VERSION 3.10)
project(pybind_test)
set(CMAKE_CXX_STANDARD 17)
find_package(PythonLibs REQUIRED)
include_directories(${PYTHON_INCLUDE_DIRS})
include_directories(pybind11/include/pybind11)
add_executable(pybind_test main.cpp arithmetic.cpp)
add_subdirectory(pybind11)
pybind11_add_module(arithmetic arithmetic.cpp)
target_link_libraries(pybind_test ${PYTHON_LIBRARIES})
The repository builds successfully and the file arithmetic.cpython-36m-x86_64-linux-gnu.so is produced. How do I import this shared object file into Python?
The pybind11 documentation has this line:
$ c++ -O3 -Wall -shared -std=c++11 -fPIC `python3 -m pybind11 --includes` example.cpp -o example`python3-config --extension-suffix`
but I want to build using CMake and I also don't want to have to specify extra include directories every time I run python to use this module.
How would I import this shared object file into Python like a normal Python module?
I am using Ubuntu 16.04.
If you open a terminal, go to the directory where arithmetic.cpython-36m-x86_64-linux-gnu.so is located, start python, and run import arithmetic, the module will be imported just like any other module.
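For example, from a Python session started in that directory (add is a hypothetical function name; use whatever your bindings actually export):
>>> import arithmetic
>>> arithmetic.add(2, 3)  # hypothetical function defined in arithmetic.cpp
5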
Another option is the following:
import sys
sys.path.insert(0, 'path/to/directory/where/so-file/is')
import arithmetic
With this method you can use both relative and absolute paths.
Besides the solution of setting the path in the Python script, presented by @super, you have two more generic solutions.
Setting PYTHONPATH
There is an environment variable in Linux (and macOS) called PYTHONPATH. If you add the path that contains your *.so to the PYTHONPATH before you call Python, Python will be able to find your library.
To do this:
export PYTHONPATH="/path/that/contains/your/so":"${PYTHONPATH}"
To apply this 'automatically' for every session you can add this line to ~/.bash_profile or ~/.bashrc. In that case, Python will always be able to find your library.
Copying your library to a path already in Python's path
You can also 'install' the library. The usual way to do this is to create a setup.py file. If set up correctly you can build and install your library using
python setup.py build
python setup.py install
(Python will know where to put your library. You can 'customize' a bit with an option like --user to use your home folder, but this doesn't seem to be of particular interest to you.)
The question remains: How to write setup.py? For your case you can actually call CMake. In fact there exists an example that does exactly that: pybind/cmake_example. You can basically copy-paste from there.
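For orientation, here is a stripped-down sketch of such a setup.py in the spirit of pybind/cmake_example (the class names follow that example; details such as the CMake arguments are assumptions to adapt):
import os
import subprocess
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext

class CMakeExtension(Extension):
    def __init__(self, name, sourcedir=''):
        # No source list here: CMake builds the extension itself
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)

class CMakeBuild(build_ext):
    def build_extension(self, ext):
        # Ask CMake to drop the compiled module where setuptools expects it
        extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
        os.makedirs(self.build_temp, exist_ok=True)
        subprocess.check_call(
            ['cmake', ext.sourcedir,
             '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=' + extdir],
            cwd=self.build_temp)
        subprocess.check_call(['cmake', '--build', '.'], cwd=self.build_temp)

setup(name='arithmetic',
      version='0.1',
      ext_modules=[CMakeExtension('arithmetic')],
      cmdclass={'build_ext': CMakeBuild})
After python setup.py install (optionally with --user), import arithmetic works from anywhere, which is exactly what the question asks for.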
Related
I can't get otherwise-available modules to be seen by a compiled Python script. How do I need to change the process below in order to accept either venv-based or global modules?
Steps:
$ python3 -m venv sometest
$ cd sometest
$ . bin/activate
(sometest) $ pip3 install PyCrypto Cython
The basic script, using a non-standard module Crypto:
# hello.py
from Crypto.Cipher import AES
import base64
obj = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
msg = "The answer is no"
ciphertext = obj.encrypt(msg)
print(msg)
print(base64.b64encode(ciphertext))
(sometest) $ python3 hello.py
The answer is no
b'1oONZCFWVJKqYEEF4JuL8Q=='
Compiling it:
(sometest) $ cython -3 --embed hello.py
(sometest) $ gcc -Os -I /usr/include/python3.5m -o hello hello.c -lpython3.5m -lpthread -lm -lutil -ldl
(sometest) $ ./hello
Traceback (most recent call last):
File "hello.py", line 1, in init hello
from Crypto.Cipher import AES
ImportError: No module named 'Crypto'
I don't think the problem is with using the venv from a Cython-embed-compiled script: the import works elsewhere on the system without the venv (that is, python3 -c 'from Crypto.Cipher import AES' does not fail).
The process works fine otherwise:
(sometest) $ echo 'print("hello world")' > hello2.py
(sometest) $ cython -3 --embed hello2.py
(sometest) $ gcc -Os -I /usr/include/python3.5m -o hello2 hello2.c -lpython3.5m -lpthread -lm -lutil -ldl
(sometest) $ ./hello2
hello world
System:
(sometest) $ python3 --version
Python 3.5.2
(sometest) $ pip3 freeze
Cython==0.29.11
pkg-resources==0.0.0
pycrypto==2.6.1
(sometest) $ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
Usually, a Python interpreter isn't "standalone": to work, it needs its standard libraries (for example ctypes (compiled) or site.py (interpreted)), and the path to other site-packages (for example numpy) must also be set.
While it is possible to make a Python interpreter fully standalone by freezing the py-modules and merging all C extensions into the resulting executable (see for example this SO post), it is easier to provide the needed installation to the embedded interpreter. One can download the files needed for a "standard" installation from the Python homepage (at least for Windows; see also this SO question).
Sometimes finding standard modules/site-packages doesn't work out of the box: one has to help the interpreter by setting the Python path, i.e. by adding <..>/sometest/lib/python3.5/site-packages (sometest being the virtual environment's root folder) to sys.path, either programmatically in the pyx-file or by setting the PYTHONPATH environment variable prior to start.
Read on for more gory details and alternative solutions.
This answer is for Linux and Python 3 (Python 3.7); the basic idea is the same for Windows/macOS, but some details might differ.
Because venv is used, we have the following alternatives to solve the issue:
adding <..>/sometest/lib/python3.5/site-packages (sometest being the virtual environment's root folder) to sys.path, either programmatically in the pyx-file or by setting the PYTHONPATH environment variable prior to start (a minimal sketch follows this list);
placing the executable with embedded Python in a subdirectory of sometest (e.g. bin, or one you create yourself);
using virtualenv instead of venv.
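For the first option, a minimal sketch of the programmatic route, placed at the top of hello.py before the failing import (the venv path is an assumption; adjust it to where sometest actually lives):
import sys

# Assumption: the venv root is /home/user/sometest; adjust to your layout
sys.path.insert(0, '/home/user/sometest/lib/python3.5/site-packages')

from Crypto.Cipher import AES  # now resolvable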
Note: For the executable with embedded Python, it does not matter whether (or which) virtual environment is activated.
Why does the above solve the issue in your scenario?
The problem is that the (embedded) Python interpreter needs to figure out where the following things are:
platform-independent directories/files, e.g. os.py, argparse.py (mostly everything *.py/*.pyc). Given sys.prefix, the interpreter can figure out where to find them (i.e. in prefix/lib/pythonX.Y).
platform-dependent directories/files, e.g. shared libraries. Given sys.exec_prefix, the interpreter can figure out where to find them (e.g. shared libraries live in exec_prefix/lib/pythonX.Y/lib-dynload).
The algorithm can be found here, and the search is performed when Py_Initialize is executed. Once these directories are found, sys.path can be constructed.
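To see what an embedded interpreter actually resolved, a quick diagnostic you can drop into the embedded script:
import sys

# Where does the embedded interpreter think it lives?
print("prefix:     ", sys.prefix)
print("exec_prefix:", sys.exec_prefix)
for entry in sys.path:
    print("path entry: ", entry)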
However, when using venv, there is a pyvenv.cfg file next to the exe or in the parent directory, which ensures that the right Python home is found; a good starting point is the home key in this file.
If Py_NoSiteFlag is not set, Py_Initialize will utilize site.py (which the interpreter can find because sys.prefix is known), or more precisely site.main(), to add the site-packages of the virtual environment to sys.path. While doing so, site.py looks for pyvenv.cfg and parses it. However, local site-packages are added to the Python path only when:
If a file named "pyvenv.cfg" exists one directory above sys.executable, sys.prefix and sys.exec_prefix are set to that directory and it is also checked for site-packages (sys.base_prefix and sys.base_exec_prefix will always be the "real" prefixes of the Python installation).
In your case pyvenv.cfg is not in the directory above the exe but in the same one; thus the local site-packages, where the libraries were installed via pip, aren't included. Global site-packages aren't included because pyvenv.cfg has the key include-system-site-packages = false. Thus no site-packages are allowed at all, and the installed libraries cannot be found.
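For reference, a typical pyvenv.cfg written by venv looks like this (values are illustrative):
home = /usr/bin
include-system-site-packages = false
version = 3.5.2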
However, moving the exe one directory down would lead to the inclusion of the local site-packages in the path.
Other scenarios are possible; what counts is the location of the executable, not which environment is activated.
A: Executable is somewhere, but not inside a virtual environment
This search heuristic works more or less reliably for installed Python interpreters, but can fail for embedded interpreters or virtual environments (see this issue for much more information).
If Python was installed with the usual apt install or similar, it will be found (due to step 4 in the search algorithm) and the system installation will be used by the embedded interpreter.
However, if files were moved around, or Python was built from source but not installed, the embedded interpreter cannot start up:
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'
In this case, calling Py_SetPythonHome or setting the environment variable $PYTHONHOME are possible solutions.
B: Executable inside a virtual environment, created with virtualenv
Assuming the virtual environment and the embedded Python use the same Python version (otherwise we have the case above), the embedded exe will use the local site-packages. The home search algorithm will always find the local home, due to this rule:
Step 3. Try to find prefix and exec_prefix relative to argv0_path, backtracking up the path until it is exhausted. This is the most common step to succeed. Note that if prefix and exec_prefix are different, exec_prefix is more likely to be found; however if exec_prefix is a subdirectory of prefix, both will be found.
In this case argv0_path is the path to the exe (there is no pyvenv.cfg file!), and the "landmarks" (lib/python$VERSION/os.py and lib/python$VERSION/lib-dynload) will be found, because they are present as symlinks in the local home above the exe.
C: Executable two folders deep inside a venv-environment
Going two folders (and not one, where it works) down in a venv environment results in case A: the pyvenv.cfg file isn't read while searching for home (it is too far above), and venv environments lack symlinks to the "landmarks" (locally only site-packages are present), so step 3 will fail, with step 4 being the only hope.
Corollary: Embedded Python will not work without a proper Python installation, unless, among other possibilities:
the needed files are packed into lib/pythonX.Y/* next to the embedding executable or somewhere above (and there is no pyvenv.cfg around to mess the search up), or
pyvenv.cfg is used to point the interpreter to the right location.
I need to try Python 3.7 with openssl-1.1.1 on Ubuntu 16.04. Both the Python and OpenSSL versions are pre-release. Following instructions from a previous post on how to statically link OpenSSL to Python, I downloaded the source for openssl-1.1.1.
Then I navigated to the OpenSSL source directory and executed:
./config
sudo make
sudo make install
Then, edit Modules/Setup.dist to uncomment the following lines:
SSL=/usr/local/ssl
_ssl _ssl.c \
-DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
-L$(SSL)/lib -lssl -lcrypto
Then download python 3.7 source code. Then, navigate inside the source code and execute:
./configure
make
make install
After I executed make install, I got this error at the end of the terminal output:
./python: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
generate-posix-vars failed
Makefile:596: recipe for target 'pybuilddir.txt' failed
make: *** [pybuilddir.txt] Error 1
I could not figure out what the problem is or what I need to do.
This has (should have) nothing to do with Python or OpenSSL versions.
The Python build process includes steps in which the newly built interpreter is launched and attempts to load some of the newly built modules, including extension modules (which are written in C and are actually shared objects (.so files)).
When a .so is loaded, the loader must find (recursively) all the .so files it depends on; otherwise it won't be able to load it.
Python has some modules (e.g. _ssl*.so, _hashlib*.so) that depend on the OpenSSL libs. Since you built yours against OpenSSL 1.1.1 (whose lib names differ from what comes by default on the system: typically 1.0.*), the loader won't be able to use the default ones.
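You can reproduce the loader failure in isolation from any Python prompt (a minimal check, using the lib name from the error message):
import ctypes

# Raises OSError ("cannot open shared object file") when the dynamic
# loader cannot locate the library on its search path
ctypes.CDLL('libssl.so.1.1')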
What you need to do is instruct the loader (check [Man7]: LD.SO(8) for more details) where to look for "your" OpenSSL libs (which are located under /usr/local/ssl/lib). One way of doing that is adding their path to the ${LD_LIBRARY_PATH} env var (before building Python):
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/ssl/lib
./configure
make
make install
You might also want to take a look at [Python.Docs]: Configure Python - Libraries options (--with-openssl, --with-openssl-rpath).
Check [SO]: How to enable FIPS mode for libcrypto and libssl packaged with Python? (#CristiFati's answer) for details on a wider problem (remotely) related to yours.
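Once the build goes through, a quick sanity check from the freshly built interpreter shows which OpenSSL it actually linked against:
import ssl

# Should report the OpenSSL 1.1.1 you built against, not the system 1.0.*
print(ssl.OPENSSL_VERSION)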
What I did to fix this:
./configure --with-ssl=./libssl --prefix=/subsystem
sed -i 's!^RUNSHARED=!RUNSHARED=LD_LIBRARY_PATH=/path/to/own/libssl/lib!' Makefile
make
make install
Setting LD_LIBRARY_PATH with export was not sufficient.
With Python 3.6.5 and openssl-1.1.0h I got stuck on the same problem. I also had to uncomment the _socket socketmodule.c line.
I have a C++ project for which I have generated Python bindings using SWIG. I am now trying to finish the project's CMake file by adding an install operation. But whenever I finish the install and try to call my functions, I get an error stating foo has no attribute bar().
The problem is that Python doesn't know where the .so file the bindings rely on is located. If both foo.py and _foo.so are in the same directory I can use the bindings perfectly. I am struggling to figure out how I am supposed to "install" both the Python bindings and the .so they depend on, all in a portable manner.
Obviously I could just export the install path of the .so to LD_LIBRARY_PATH, but this seems like a hacky workaround for what must have a proper solution.
My CMakeLists.txt (I have cut out the bits related to compiling my C++ lib, RTK):
# Project
##
# TODO this actually needs 3.3+
cmake_minimum_required(VERSION 2.6)
project(RTKLIB)
FIND_PACKAGE(SWIG REQUIRED)
INCLUDE(${SWIG_USE_FILE})
FIND_PACKAGE(PythonLibs 3 REQUIRED)
INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH})
find_program(PYTHON "python3" REQUIRED)
include(GNUInstallDirs)
# Variable declarations
##
# Define this directory
set(RTKLIB_ROOT ${CMAKE_CURRENT_SOURCE_DIR})
# Define the build dir
set(RTKLIB_BIN_DIR "${RTKLIB_ROOT}/build")
list(APPEND CMAKE_MODULE_PATH "${RTKLIB_ROOT}/cmake")
# Setup python vars
set(SETUP_PY_IN "${RTKLIB_ROOT}/setup.py.in") # initial version of setup.py
set(SETUP_PY "${RTKLIB_BIN_DIR}/setup.py") # cmake generated setup.py
set(OUTPUT "${RTKLIB_BIN_DIR}/python_timestamp") # Timestamp used as dep
set(RTKLIB_PY "rtk_lib") # name of the python lib
# Set the output dir for SWIG
set(CMAKE_SWIG_OUTDIR ${RTKLIB_BIN_DIR}/${RTKLIB_PY})
# Generate Python bindings
##
# SWIG Config
SET_PROPERTY(SOURCE include/rtk_lib.i PROPERTY CPLUSPLUS ON)
SWIG_ADD_MODULE(${RTKLIB_PY} python include/rtk_lib.i) # Generate C-Python bindings
SWIG_LINK_LIBRARIES(${RTKLIB_PY} RTK ${PYTHON_LIBRARIES}) # Link the bindings with python
# Generate the setup.py file
configure_file(${SETUP_PY_IN} ${SETUP_PY})
# Build command that depends on the SWIG output files and updates the timestamp
add_custom_command(OUTPUT ${OUTPUT}
COMMAND ${PYTHON} ${SETUP_PY} build
COMMAND ${CMAKE_COMMAND} -E touch ${OUTPUT}
DEPENDS ${RTKLIB_BIN_DIR}/${SWIG_MODULE_${RTKLIB_PY}_REAL_NAME})
# Custom target that depends on the timestamp file generated by the custom command
add_custom_target(ALL DEPENDS ${OUTPUT})
# Install the shared library
install(TARGETS ${SWIG_MODULE_${RTKLIB_PY}_REAL_NAME}
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
# Install to user's packages
install(CODE "execute_process(COMMAND ${PYTHON} ${SETUP_PY} install --user)")
And here is my setup.py.in if it's any help:
from distutils.core import setup
setup(name='rtk_lib',
version='${PACKAGE_VERSION}',
description="""Python bindings for rtk_lib, allowing for serial and
and file interfaces with RTK messages.""",
packages=['${RTKLIB_PY}'])
Quick summary of the code: it generates wrapper classes for the C++ that are Python-compatible, then compiles and links those wrapper classes against the Python libs and the original RTK C++ library. After that you have a directory called rtk_lib which holds both the wrapper classes and the rtk_lib.py module. Outside this rtk_lib directory is the generated _rtk_lib.so shared library that rtk_lib.py relies on. So in order to get the bindings to work, I copy _rtk_lib.so into that rtk_lib directory and call python3. Then I can import the lib and everything is great.
I tried installing the shared lib, but even then I still get the same rtk_lib has no attribute blablabla().
Looks like an old question, but here goes anyway.
See this example, swig_examples_cpp, showing simple C++ functions wrapped by SWIG, using CMake and CLion to build it. The C version is here.
Here's the full Python CMake file:
project(python_example)
find_package(SWIG REQUIRED)
include(${SWIG_USE_FILE})
find_package(PythonLibs)
include_directories(${PYTHON_INCLUDE_PATH})
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
set(CMAKE_SWIG_FLAGS "")
set_source_files_properties(../src/example.i PROPERTIES CPLUSPLUS ON)
swig_add_library(python_example
TYPE MODULE
LANGUAGE python
OUTPUT_DIR ../../py_out # move the .so to py_out
OUTFILE_DIR . # leave the .cpp in cmake-build-debug
SOURCES ../src/example.i
../src/example.cpp ../src/example.h
)
set_target_properties(python_example PROPERTIES
LIBRARY_OUTPUT_DIRECTORY ../../py_out # must match dir in OUTPUT_DIR
)
After building it, run python test.py to see it go. Note it's all in bash/Ubuntu, so macOS should be OK, but Windows may cause you some churn.
See the README for the full details.
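If you only want to confirm the module loads, a minimal smoke test might look like this (python_example is the module name from the CMake file above; the wrapped names depend on example.i):
# test_smoke.py -- run from py_out, where the build dropped the module
import python_example
print(dir(python_example))  # lists whatever example.i actually exposed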
I'm trying to compile the msgpack-python Python module with gcc (v4.7) on Solaris 10. The installed Python is 2.6.8. Distutils automatically picks up an incorrect compiler option (-xcode=pic32) that I want to remove from the command.
The full command that distutils is putting together is:
/opt/csw/bin/gcc-4.7 -DNDEBUG -O -O2 -pipe -mcpu=v9 -I/opt/csw/include -xcode=pic32 -I/opt/csw/include/python2.6 -c msgpack/_msgpack.c -o build/temp.solaris-2.10-sun4v-2.6/msgpack/_msgpack.o
but produces this error:
gcc-4.7: error: language code=pic32 not recognized
then fails. If I remove the -xcode=pic32 option and manually execute the above command, the module compiles successfully.
I need to be able to do this in an automated fashion though (using a build farm to produce the packages). The question is: without modifying or changing the current Python or distutils, is there a way to "remove" this option that distutils is picking up, so that the python setup.py process builds the module appropriately (i.e. without the pic32 option)?
Thanks
Do not compile with that gcc. -xcode=pic32 is a Sun Studio compiler command-line parameter. It will lead to linking problems too, even if the compile succeeds. Compile with the Sun Cool Tools gcc, which understands this parameter, or use Oracle Solaris Studio for SPARC.
Some hints:
GCC produces very slow code for SPARC; that's why Sun created Cool Tools.
You don't have to remove -xcode=pic32; rather, change it to -m32 -fpic if you insist on gcc-4.7.
To get a mature setup of OSS tools, I use pkgsrc, compiling with Studio Express for the particular CPU (-xtarget=native).
You may also have luck setting the following env vars:
export CC=$gcc_dir_path # Example: /usr/bin/gcc
export CXX=$gxx_dir_path # Example: /usr/bin/g++
export CFLAGS=''
export CPPFLAGS=''
export CXXFLAGS=''
export LDFLAGS=''
Note: There is a difference between an unset env var and one set as empty. I had build bugs with Python packages when my *FLAGS env vars were unset (gcc being called with the option -xO2 was the cause). Setting them as empty did the trick.
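If you cannot control the environment, a Python-level alternative (a hedged sketch, not msgpack-specific) is to scrub the offending flag from the configuration variables distutils caches, before the build runs, e.g. at the top of setup.py or a wrapper script:
# Hypothetical snippet: strip the Sun Studio-only flag before distutils
# configures its compiler. get_config_vars() returns the cached dict,
# so in-place edits take effect for the rest of the process.
from distutils import sysconfig

config_vars = sysconfig.get_config_vars()
for key in ('CFLAGS', 'OPT', 'PY_CFLAGS', 'CCSHARED'):
    value = config_vars.get(key)
    if isinstance(value, str) and '-xcode=pic32' in value:
        config_vars[key] = value.replace('-xcode=pic32', '-fPIC')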
I am working on an application written in C. One part of the application should embed Python, and that is where my current problem lies: I am trying to link my source against the Python library, but it does not work.
As I use MinGW, I created the python26.a file from python26.lib with dlltool and put the .a file in C:/Program Files (x86)/python/2.6/libs.
I then compile the file with this command:
gcc -shared -o mod_python.dll mod_python.o "-LC:\Program Files (x86)\python\2.6\libs" -lpython26 -Wl,--out-implib,libmod_python.a -Wl,--output-def,mod_python.def
and I get these errors:
Creating library file: libmod_python.a
mod_python.o: In function `module_init':
mod_python.c:34: undefined reference to `__imp__Py_Initialize'
mod_python.c:35: undefined reference to `__imp__PyEval_InitThreads'
... and so on ...
My Python "root" folder is C:\Program Files (x86)\python\2.6
The Devsystem is a Windows Server 2008
GCC Information: Reading specs from C:/Program Files (x86)/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.5 (mingw-vista special r3)
What am I doing wrong? How do I get it compiled and linked? :-)
Cheers, gregor
Edit:
I forgot to include information about my Python installation: it's the official python.org installation, version 2.6.1.
... and how I created the python.a file:
dlltool -z python.def --export-all-symbols -v c:\windows\system32\python26.dll
dlltool --dllname c:\Windows\system32\python26.dll --def python.def -v --output-lib python26.a
Well, on Windows the Python distribution already comes with a libpython26.a in the libs subdir, so there is no need to generate .a files using dlltool.
I tried a little example with a single C file, toto.c:
gcc -shared -o ./toto.dll ./toto.c -I/Python26/include/ -L/Python26/libs -lpython26
And it works like a charm. Hope it will help :-)
Python (at least my distribution) comes with a "python-config" program that automatically creates the correct compiler and linker options for various situations. However, I have never used it on Windows. Perhaps this tool can help you though?
IIRC, dlltool does not always work. Having Python 2.6 + WOW64 makes things even less likely to work. For numpy, here is how I did it: basically, I used objdump.exe to dump the symbol table from the DLL, which I parsed to generate the .def. You should check whether your missing symbols are in the .def; otherwise it won't work.