I am in the process of developing a Python extension to allow a script running on a Raspberry Pi to control a sensor. The sensor manufacturer provided my organization with the source code to their C API, and I am trying to create a set of bindings to make the sensor accessible in Python scripts.
The makefile that came with the API source produced a set of object files, which I then archived into a static library (libvl53l1.a) using the command:
ar -cvq libvl53l1.a *.o
I then added this library to the setup.py script of my extension by adding this flag:
extra_compile_args=["-l:libvl53l1.a"]
The code, library, and setup.py script are currently in the same directory for convenience. Building the extension with the command python3 setup.py build_ext --inplace runs without errors; however, when I try to import my library in a Python interpreter, the import fails because of an undefined symbol "VL53L1_WaitDeviceBooted" in the extension's .so file. Listing the symbols in libvl53l1.a:
nm libvl53l1.a | grep "VL53L1_WaitDeviceBooted"
shows that the library does expose a symbol of this name. Therefore, I believe that the linker is failing to link the extension with this static library. Is there a step I am missing that is causing this? I have also tried removing the .a extension as recommended in the Python documentation, to no avail.
Thank you
extra_compile_args=["-l:libvl53l1.a"]
This setting adds -l:... to the compile command, but the compiler ignores that option, because it is a linker option and no linking is done during compilation.
You want extra_link_args=["-lvl53l1"] instead, which adds -lvl53l1 to the link command (the linker will not ignore that option while performing the link).
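For reference, a minimal setup.py along these lines should work; the extension and source file names here are assumptions, and libraries/library_dirs is the portable setuptools equivalent of passing -lvl53l1 -L. at link time:

from setuptools import setup, Extension

setup(
    name="vl53l1x",
    ext_modules=[
        Extension(
            "vl53l1x",                     # hypothetical extension name
            sources=["vl53l1x_module.c"],  # hypothetical binding source
            library_dirs=["."],            # the .a file sits next to setup.py
            libraries=["vl53l1"],          # resolves libvl53l1.a at link time
        )
    ],
)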
I ran into issues installing a cross compiler on my Ubuntu 19.04 64-bit machine. I would like to cross-compile Python code into an executable for my Raspberry Pi 3 Model B+ running Debian Stretch.
I followed many guides; none of them worked. I am currently following https://github.com/Yadoms/yadoms/wiki/Cross-compile-for-raspberry-PI
I followed these steps of the guide above:
- Setup Environment
- Install cross compiler
- Boost 1.64
- Python
On the last part (Python) it fails to execute the last instruction.
$ CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++ AR=arm-linux-gnueabihf-ar RANLIB=arm-linux-gnueabihf-ranlib ./configure --host=arm-linux-gnueabihf --target=arm-linux-gnueabihf --build=x86_64-linux-gnu --prefix=$HOME/Desktop/rapsberry/depsBuild/python --disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no ac_cv_have_long_long_format=yes --enable-shared
output:
checking build system type... x86_64-pc-linux-gnu
checking host system type... arm-unknown-linux-gnueabihf
checking for python3.7... python3.7
checking for python interpreter for cross build... python3.7
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking MACHDEP... checking for --without-gcc... no
checking for --with-icc... no
checking for arm-linux-gnueabihf-gcc... arm-linux-gnueabihf-gcc
checking whether the C compiler works... no
configure: error: in `/home/slr/Desktop/raspberry/boost_1_64_0/Python-3.7.5':
configure: error: C compiler cannot create executables
See `config.log' for more details
and then:
$ make HOSTPYTHON=$HOME/Desktop/raspberry/depsBuild/pythonhost/python HOSTPGEN=$HOME/Desktop/raspberry/depsBuild/pythonhost/Parser/pgen BLDSHARED="arm-linux-gnueabihf-gcc -shared" CROSS-COMPILE=arm-linux-gnueabihf- CROSS_COMPILE_TARGET=yes HOSTARCH=arm-linux BUILDARCH=arm-linux-gnueabihf
output:
make: *** No targets specified and no makefile found. Stop.
I need Python 3 for my purposes.
I am really stuck on this problem; does anyone have any ideas? I also tried QEMU and Docker (https://raspberrypi.stackexchange.com/questions/109488/building-a-virtual-machine-with-the-img-file-of-the-raspberry-pi-stretch), and both of them failed while compiling my target code:
gcc: internal compiler error
My code is pretty long (several thousand lines), while small programs compile successfully. Thanks in advance.
There seems to be something wrong with the toolchain you're using, or with the way you're invoking the Python configure script.
Either way, it's impossible to debug without seeing your exact setup, so I'll start from scratch here.
I'm in the process of documenting a similar project here: Raspberry Pi C++ development.
Toolchain
The toolchains in the raspberrypi/tools repository are pretty old. I usually just build a new one using Crosstool-NG (which is what the raspberrypi/tools toolchains were built with as well, IIRC).
I used the armv8-rpi3-linux-gnueabihf sample.
You can of course build it yourself, but this can take quite some time, so you can also download the one I built from Docker Hub (see later).
You can find more information about how it was built here: Detailed information about configuring and building toolchains.
Compiling Python for the build system
In order to cross-compile a Python module, you need the same version of Python twice: once for your build system (the computer you're building on) and once for the host system (the Raspberry Pi you're building for).
Both will be compiled from source, to ensure that they are exactly the same version.
The build Python will just be used for cross-compiling the host Python, so we won't enable optimizations and optional modules.
This is the script I used: python-build.sh
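At its core, this is just a vanilla native CPython build (a minimal sketch, with an assumed prefix; the linked python-build.sh has the full details):

# native (build) Python: no optimizations, it only drives the cross build
./configure --prefix="$HOME/python-build"
make -j$(nproc)
make install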
You need OpenSSL, Zlib and libffi for pip to work. I built them from source as well, but you could of course install them using your package manager (you need the -dev versions).
Again, you can find the install scripts I used here.
Cross-compiling Python for the host system
Before you can cross-compile Python for the Raspberry Pi, you'll have to cross-compile its dependencies. A detailed explanation can be found here: Cross-compiling the dependencies.
You can find the scripts in the same folder on GitHub I linked to earlier, for instance, python.sh.
There are some caveats when cross-compiling: you need a pkg-config with the right prefix that looks for the required libraries in the sysroot of your cross-compilation, instead of in your build system's library folders. You also have to specify the include directories and library folders when calling the configure script.
All of this is handled by this Dockerfile and the scripts in the same folder.
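As a rough illustration of the pkg-config part (a sketch only; the sysroot path is an assumption):

# point pkg-config at the target sysroot instead of the build machine
export PKG_CONFIG_SYSROOT_DIR="$HOME/RPi-sysroot"
export PKG_CONFIG_LIBDIR="$PKG_CONFIG_SYSROOT_DIR/usr/lib/pkgconfig:$PKG_CONFIG_SYSROOT_DIR/usr/share/pkgconfig"
# don't fall back to the build system's .pc files
unset PKG_CONFIG_PATH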
Crossenv
The easiest way to cross-compile Python modules is to use Crossenv. The instructions can be found in the README of the GitHub page.
When everything is set up, you can run python setup.py bdist_wheel.
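For reference, a minimal Crossenv setup looks roughly like this (the interpreter paths are assumptions; the README has the authoritative steps):

# install crossenv into the native (build) Python
/path/to/build/python3 -m pip install crossenv
# pair the build Python with the cross-compiled Python in a virtualenv
/path/to/build/python3 -m crossenv /path/to/cross/python3 venv
source venv/bin/activate
# inside the environment, build-pip, cross-pip and python are available
python setup.py bdist_wheel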
Example
As an example, you can follow these steps to compile a simple Python module using Cython:
1. Pull the toolchain and the cross-compiled Python from Docker Hub
This is an image that contains the cross-compilation toolchain, native and cross-compiled Python, as well as Crossenv.
docker pull tttapa/rpi3-armv8-python-opencv-cross
2. Create the Python file to compile and setup.py to build it
Create these two files:
helloworld.py
print("Hello, World")
setup.py
from setuptools import setup
from Cython.Build import cythonize
from sys import version_info
setup(
name='helloworld',
version='0.0.1',
ext_modules = cythonize("helloworld.py", language_level=version_info[0])
)
3. Create a script that builds the Python module
This script will be run inside of the Docker container.
build.sh
#!/usr/bin/env bash
set -e
cd /tmp/workdir
source "$HOME/crossenv/bin/activate"
build-pip install cython
cross-pip install wheel
python setup.py bdist_wheel
4. Start the Docker container and build the module
Run the following command in the same folder where you created the three files.
docker run --rm -v "$PWD:/tmp/workdir" \
tttapa/rpi3-armv8-python-opencv-cross \
bash "/tmp/workdir/build.sh"
When the build is complete, you'll find a file dist/helloworld-0.0.1-cp38-cp38-linux_arm.whl, which you can install using pip install helloworld-0.0.1-cp38-cp38-linux_arm.whl on the Raspberry Pi. You'll also find the helloworld.c file generated by Cython.
To be able to run it, you'll probably have to install the cross-compiled libraries and Python itself to the RPi. You can do this by copying everything inside of ~/RPi-staging (inside of the Docker container) to the Pi's root folder /.
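For example, something along these lines (the hostname and staging path are assumptions):

# copy the staging area from the container to the Pi
rsync -av ~/RPi-staging/ pi@raspberrypi.local:/tmp/staging/
# then, on the Pi, merge it into the root filesystem
sudo cp -a /tmp/staging/. /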
I have a python module which contains a C++ extension as well as a shared library on which the C++ extension depends. The shared library is picked up by setuptools as an extra_object on the extension. After running python setup.py bdist_wheel, the module wheel gets properly generated and has a directory structure as follows:
+-pymodule.whl
| +-pymodule
| +-pymodule-version.dist-info
| +-extension.so
| +-shared_lib.so
To install this wheel, in my python environment I call pip install pymodule.whl which copies the python sources as well as the .so files into the environment's site-packages directory.
After installing the module, one can then attempt to import the module by calling import pymodule in a terminal for the environment. This triggers an exception to be thrown:
ImportError: shared_lib.so: cannot open shared object file: No such file or directory
This exception can be resolved by appending the appropriate site-packages directory to the LD_LIBRARY_PATH variable; however, it seems that this should work out of the box, especially considering that python is clearly able to locate extension.so.
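Concretely, the workaround looks like this (the environment path and Python version are assumptions; adjust for your setup):

# make the dynamic loader search site-packages for shared_lib.so
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/env/lib/python3.8/site-packages"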
Is there a way to force python to locate this shared library without having to explicitly point LD_LIBRARY_PATH at the installation location (i.e. site-packages)?
This question works around a similar problem by using package data and explicitly specifying an install location for the shared library. The issue I have with that approach is that the shared object is decoupled from the extension. In my case, the shared library and the extension are both targets built by the same cmake build. I had previously attempted to use skbuild to build cmake-based extensions; however, as per this issue, skbuild has a similar problem with including other libraries generated as part of the extension build.
We wish to create an R package that wraps the Python runtime and is 'dependency free' (i.e. one need not install Python for the R package to work). There is already a package that allows R to call Python (the CRAN package rPython), but it requires that Python be installed on the target machine. We would like all dependencies to be installed when the envisioned package is installed via the standard R package install mechanism.
I forked rPython and created this variation: https://github.com/brucehoff/rWithPython. The package works on Unix and Mac. It downloads and builds the Python source and then accesses the Python runtime from R. By using a static version of libpython, it builds a shared object that can be installed on another machine without leaving any dependencies behind.
The problem is how to get it to work on Windows. On Windows, R packages are built using "R Tools" (https://cran.r-project.org/bin/windows/Rtools/), which uses the cygwin/mingw stack. I tried running configure/make as on Unix, but it fails. I then tried linking a static libpython that I built on a Windows box using Visual Studio. Python is very helpful, providing instructions for changing the Visual Studio build to create a static library:
PCBuild\readme.txt says, in part:
The solution has no configuration for static libraries. However it is
easy to build a static library instead of a DLL. You simply have to set
the "Configuration Type" to "Static Library (.lib)" and alter the
preprocessor macro "Py_ENABLE_SHARED" to "Py_NO_ENABLE_SHARED". You may
also have to change the "Runtime Library" from "Multi-threaded DLL
(/MD)" to "Multi-threaded (/MT)".
This works great and I get a static library for Python. However, when I try to link the library under cygwin/rtools, the linker gives an error:
gcc -m32 -I"C:/bin/R/include" -DNDEBUG -I"d:/RCompile/CRANpkg/extralibs64/local/include" -I"C:/Python35/Include" -I"C:/Python35/PC" -O3 -Wall -std=gnu99 -mtune=core2 -c pycall.c -o pycall.o
gcc -m32 -shared -s -static-libgcc -o rWithPython.dll tmp.def pycall.o -LC:/Python35/PCbuild/win32 -lpython35 -Ld:/RCompile/CRANpkg/extralibs64/local/lib/i386 -Ld:/RCompile/CRANpkg/extralibs64/local/lib -LC:/bin/R/bin/i386 -lR
c:/rtools/gcc-4.6.3/bin/../lib/gcc/i686-w64-mingw32/4.6.3/../../../../i686-w64-mingw32/bin/ld.exe: C:/Python35/PCbuild/win32/libpython35.a(C:/hgpy/cpython/PCbuild/obj//win32_Release/pythoncore/getbuildinfo.obj): Recognised but unhandled machine type (0x14c) in Import Library Format archive
pycall.o:pycall.c:(.text+0x5): undefined reference to '_imp__Py_Initialize'
pycall.o:pycall.c:(.text+0x1a): undefined reference to '_imp__PyRun_SimpleStringFlags'
pycall.o:pycall.c:(.text+0x31): undefined reference to '_imp__Py_Finalize'
pycall.o:pycall.c:(.text+0x56): undefined reference to '_imp__PyRun_SimpleStringFlags'
pycall.o:pycall.c:(.text+0x8c): undefined reference to '_imp__PyImport_AddModule'
pycall.o:pycall.c:(.text+0x95): undefined reference to '_imp__PyModule_GetDict'
pycall.o:pycall.c:(.text+0xa8): undefined reference to '_imp__PyDict_GetItemString'
pycall.o:pycall.c:(.text+0xb5): undefined reference to '_imp__PyUnicode_AsUTF8String'
collect2: ld returned 1 exit status
no DLL was created
ERROR: compilation failed for package 'rWithPython'
From what I've read, "machine type" 0x14c is "Intel 386 or later, and compatible processors", i.e. the most common/expected machine type. So I'm guessing the error is a red herring; the problem is an incompatibility between compilers, not machines.
Any suggestions on how to proceed are appreciated!!
---- UPDATE ----
I verified that I can build/link (1) when linking in the library(ies) that are part of the standard Windows Python installation and (2) when building from source using MS Visual Studio, without modifying the build in any way. The problem is replicated as soon as I modify the Visual Studio build settings to produce a static library (python35.lib) following the guidelines in PCbuild\readme.txt. Those guidelines are a bit ambiguous. I will probe further, but if anyone has had success generating a static Python library in MS VS, please let me know!
---- ANOTHER UPDATE ----
We have a solution to this question:
https://github.com/Sage-Bionetworks/PythonEmbedInR
(The code is open source, under the GPL-3 license.)
You can see how we solved the problem of running on Windows:
https://github.com/Sage-Bionetworks/PythonEmbedInR/blob/master/configure.win
In short, we do not try to compile from source or make a static library; rather, we use Python's 'embeddable zip' and make sure its libraries are on the search path for our application. It seems to work great!
You need to put the Py_NO_ENABLE_SHARED define in your application code as well: basically, anywhere the Python.h file is included. If it is not present, the include file assumes you have linked with a .dll and acts accordingly (and, in this case, incorrectly).
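Concretely, that means something like this at the top of every translation unit that includes Python.h (or, equivalently, passing -DPy_NO_ENABLE_SHARED on the compiler command line):

/* tell Python.h we linked a static libpython, not a DLL */
#define Py_NO_ENABLE_SHARED
#include <Python.h>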
I want to use the Cantera library in Python. I have been using it in C++, where I link it by adding these couple of lines to my makefile:
CANT_LIB = $HOME/usr/local/Cantera201/lib/
CANT_INC = $HOME/usr/local/Cantera201/include/ -I $HOME/usr/local/Cantera201/include/cantera \
with CANT_LIB and CANT_INC being used when compiling.
I have very limited experience with Python. Is there an equivalent to linking libraries in Python? I have tried adding the Cantera path to PYTHONPATH, but it did not work. I am working on a Linux server where I do not have superuser access, with Python 2.6.6.
You need to install Cantera's Python module to use it; the raw C/C++ libraries aren't enough. If you install it using the directions on their website, it should be placed in the appropriate Python site-packages directory automatically and be available for use with just import cantera.
I've used python-for-android to create a Kivy-based application running on Android.
Some parts of my application have been optimized in C++ using Cython.
I managed to compile all my code using python-for-android and a custom recipe.
My code also works perfectly with Kivy under Linux.
But on my Android device, it fails to load some C++ functions. For instance, I get the message:
ImportError: Cannot load library: reloc_library[1307]: 1839 cannot locate '_ZNSt9basic_iosIcSt11char_traitsIcEE4initEPSt15basic_streambufIcS1_E'...
Any ideas?
Thanks
I've finally managed to make my code work using C++ under Android.
There were two difficulties:
1 - Accessing the C++ headers from the ARM environment created by push_arm. I had to add the correct includes in my recipe and modify the default CXX var:
#dirty hack
export C_INCLUDE="-I$ANDROIDNDK/sources/cxx-stl/gnu-libstdc++/$TOOLCHAIN_VERSION/include/ -I$ANDROIDNDK/sources/cxx-stl/gnu-libstdc++/$TOOLCHAIN_VERSION/libs/armeabi/include/"
export OLD_BOUBOU=$CC
export CC="$CXX $C_INCLUDE"
try $BUILD_PATH/python-install/bin/python.host setup.py install -O2
#try cp libgnustl_shared.so $LIBS_PATH/
try cp $ANDROIDNDK/sources/cxx-stl/gnu-libstdc++/4.4.3/libs/armeabi/libgnustl_shared.so $LIBS_PATH/
export CC=$OLD_BOUBOU
2 - Finding the shared library containing the STL functions, and loading it. This was the harder part:
After some research, I discovered that the STL functions are stored in libgnustl_shared.so, not libstdc++.so. So you have to embed this library in your apk.
This is the purpose of the line try cp $ANDROIDNDK/sources/cxx-stl/gnu-libstdc++/4.4.3/libs/armeabi/libgnustl_shared.so $LIBS_PATH/
Then, you have to load it. I've modified :
src/src/org/renpy/android/PythonActivity.java
src/src/org/renpy/android/PythonService.java
by adding this line after the other System.loadLibrary() calls:
System.loadLibrary("gnustl_shared");
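In context, the static loader block ends up looking roughly like this (the other library names are illustrative placeholders for whatever the file already loads):

static {
    // libraries already loaded by python-for-android (names illustrative)
    System.loadLibrary("sdl");
    System.loadLibrary("python2.7");
    // added: the GNU STL shared library bundled from the NDK
    System.loadLibrary("gnustl_shared");
}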
I am currently trying to build pybox2d (with swig) via python-for-android.
The build seems to be fine until I try to import Box2D (from the app on the actual Android device); I get a
"cannot locate symbol __cxa_end_cleanup" error.
Unfortunately, the above fix does not help.
Any other ideas?
Update:
I could fix all issues.
I had to link against stlport_shared.
All my work is in my fork of https://github.com/DerThorsten/python-for-android/. It works with newer NDKs than the original python-for-android.
And it has Box2D.