Checking dependencies with ldd on the *.so file produced by Cython, the list of dependencies contains myLib.o instead of libforcython.so.
I do not understand why it is trying to load myLib.o instead of libforcython, as specified in my setup.py.
When Python executes the module, this produces an error similar to Cython unable to find shared object file. However, contrary to the linked question and its answer, my problem does not seem to happen during Python initialization, but rather during the cythonization itself.
I am using these files:
example.pyx :
cdef extern from "myLib.h":
    void testFunction()
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
pythoncallsc_extension = Extension(
    name="pythoncallsc",
    sources=["./example.pyx"],
    libraries=["forcython"])

setup(name="pythoncallsc",
      ext_modules=cythonize([pythoncallsc_extension]))
When I look at the log generated by python3 setup.py build_ext --inplace, I can clearly see the command line that launches gcc, and it contains:
... -lforcython -lpython3.7m ...
So gcc is clearly linking against my library libforcython.
The lib contains:
the header myLib.h
the generated libforcython.so file, which provides the function void testFunction(void).
This lib is built separately, elsewhere on my system. I have checked the include and library directories, and they are clearly on the search paths of my Cython project.
The cythonization produces the library pythoncallsc.cpython-37m-x86_64-linux-gnu.so
But against all my expectations, when I do:
ldd pythoncallsc.cpython-37m-x86_64-linux-gnu.so
linux-vdso.so.1 (0x00007ffee40fb000)
myLib.o => not found <=== HERE ???
libpython3.7m.so.1.0 => /path/to/lib...
libpthread.so.0 => ...
libc.so.6 => ...
...
Why is Cython producing an output that depends on a myLib.o file and not on libforcython.so?
This myLib.o file does not exist, and consequently I get an error when I load my module:
`ImportError: myLib.o: cannot open shared object file: No such file or directory`
Any clue?
When this error occurs, check how the C library is built.
As suggested in the comments, the libforcython library was not built properly.
It was wrongly built in the Makefile: the gcc soname option was given the wrong value. The -Wl,-soname value is embedded in the shared library as its DT_SONAME, and it is that name, not the file name, that dependent binaries record at link time; with -Wl,-soname,myLib.o, anything linked against libforcython.so ends up depending on myLib.o.
Correct:
gcc myLib.o -shared -fPIC -Wl,-soname,libforcython.so -o libforcython.so -lc
Incorrect:
gcc -shared -fPIC -Wl,-soname,myLib.o -o libforcython.so myLib.o -lc
In this case the soname is not really useful anyway, since I don't use a version number for the library; it is a simpler use case.
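To confirm the fix, you can inspect the DT_SONAME that actually ended up in the library. Here is a minimal sketch, assuming GNU readelf is installed and that libforcython.so sits in the current directory (both assumptions; adjust the path as needed):

# soname_check.py -- print the DT_SONAME recorded in a shared library.
import subprocess

def soname(libpath):
    # readelf -d prints a line like:
    #   0x0e (SONAME)  Library soname: [libforcython.so]
    out = subprocess.run(["readelf", "-d", libpath],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "SONAME" in line and "[" in line:
            return line.split("[", 1)[1].rstrip("]")
    return None

print(soname("./libforcython.so"))  # expect libforcython.so, not myLib.o

After relinking with the correct -soname, ldd on the Cython-produced module should list libforcython.so again.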
The scikit-build distribution provides usage examples of FindF2PY and UseF2PY, but they are incomplete, providing only a partial CMakeLists.txt file without the other required files. Based on the documentation, I have not been able to make something that builds.
Following the examples in the scikit-build documentation, I created the following files:
CMakeLists.txt:
cmake_minimum_required(VERSION 3.10.2)
project(skbuild_test)
enable_language(Fortran)
find_package(F2PY REQUIRED)
add_f2py_target(f2py_test f2py_test.f90)
add_library(f2py_test MODULE f2py_test.f90)
install(TARGETS f2py_test LIBRARY DESTINATION f2py_test)
setup.py:
import setuptools
from skbuild import setup

requires = ['numpy']

setup(
    name="skbuild-test",
    version='0.0.1',
    description='Performs line integrals through SAMI3 grids',
    author='John Haiducek',
    requires=requires,
    packages=['f2py_test']
)
f2py_test.f90:
module mod_f2py_test
  implicit none
contains
  subroutine f2py_test(a, b, c)
    real(kind=8), intent(in) :: a, b
    real(kind=8), intent(out) :: c
    c = a + b  ! assign the intent(out) argument so the wrapper returns a value
  end subroutine f2py_test
end module mod_f2py_test
In addition, I created a directory f2py_test containing an empty __init__.py.
The output from python setup.py develop shows that scikit-build invokes CMake and compiles my Fortran code. However, it fails to find Python.h while compiling the f2py wrapper code:
[2/7] Building C object CMakeFiles/_f2...kages/numpy/f2py/src/fortranobject.c.o
FAILED: CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -O3 -DNDEBUG -arch x86_64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -mmacosx-version-min=10.14 -MD -MT CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o -MF CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o.d -o CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o -c ../../../venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c
In file included from ../../../venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c:2:
../../../venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.h:7:10: fatal error: 'Python.h' file not found
#include "Python.h"
^~~~~~~~~~
1 error generated.
First caveat: there may be a better way to do this, as I just figured out how to make scikit-build work by stumbling on your post and looking at the documentation. Second caveat: I'm also learning CMake. So, there may be a better way.
There are a couple of things you'll need to make your example work. The big one is that the second argument of add_f2py_target() isn't the source file: it is either the name of a pre-generated *.pyf file or, to let f2py generate one, an argument without the *.pyf extension. The other is adding the include directories for the various components.
I made your example work with the following CMakeLists.txt:
cmake_minimum_required(VERSION 3.10.2)
project(skbuild_test)
enable_language(Fortran)
find_package(F2PY REQUIRED)
find_package(PythonLibs REQUIRED)
find_package(Python3 REQUIRED COMPONENTS NumPy)
# The following command either points to an existing .pyf file (if the argument
# has a .pyf extension) or lets f2py generate one; the argument is not a source file.
add_f2py_target(f2py_test f2py_test)
add_library(f2py_test MODULE f2py_test.f90)
include_directories(${PYTHON_INCLUDE_DIRS})
include_directories(${_Python3_NumPy_INCLUDE_DIR})
target_link_libraries(f2py_test ${PYTHON_LIBRARIES})
install(TARGETS f2py_test LIBRARY DESTINATION f2py_test)
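For reference, once built and installed (e.g. via python setup.py develop), the wrapped routine should be callable roughly as follows. This is a sketch: the exact attribute path under which f2py exposes a routine nested inside a Fortran module can vary, so check with dir() if the names below don't match:

# Calling the f2py-wrapped routine; names follow the example above.
from f2py_test import f2py_test

c = f2py_test.mod_f2py_test.f2py_test(1.0, 2.0)  # intent(out) arg becomes the return value
print(c)  # 3.0 with the c = a + b body shown earlier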
The Problem
I've been learning the ins and outs of both Cython and the C API for Python and ran into a peculiar problem. I've created a C program that simply embeds a Python interpreter into it, imports a local .py file, and extracts a function/method from this file. I've been successful in calling a method from a localized file and returning the value to C, so I've gotten the process down to some degree.
I've been working on two different variants: 1) creating a standalone executable and 2) creating a shared library and allowing another executable to access the functions I've built in Python. For case 1, I have no issues. For case 2, I can only make this work if there are no Python dependencies: if my Python function imports a Python core library or any library that is dynamically loaded (e.g. import math, import numpy, import ctypes, import matplotlib), an ImportError is raised. The paths to these imports are typically in ../pathtopython/lib/pythonX.Y/lib-dynload for the Python core libraries/modules.
Some Notes:
I'm using an Anaconda distribution for Python 3.7 on CentOS 7. I do not have administrative access, and there are system Pythons that might be interfering. I've demonstrated that I can make this work by using the system Python, but then I lose the ability to add packages, so I will highlight my findings from using the system Python too.
My C Code:
struct results cfuncEval(double height, double diameter){
    struct results res = {0};  /* result fields elided in this excerpt */
    Py_Initialize();

    /* Prepend the directory containing myModule.py to sys.path */
    PyObject *sys_path, *path;
    sys_path = PySys_GetObject("path");
    path = PyUnicode_DecodeFSDefault("path/to/my/python/file/");
    PyList_Insert(sys_path, 0, path);
    Py_DECREF(path);

    PyObject *pName, *pModule;
    pName = PyUnicode_DecodeFSDefault("myModule");
    pModule = PyImport_Import(pName);
    Py_DECREF(pName);
    PyErr_Print();

    PyObject *pFunc = PyObject_GetAttrString(pModule, "cone_funcEval");

    /* Note: cone_funcEval expects (diameter, height), in that order */
    PyObject *args = PyTuple_Pack(2, PyFloat_FromDouble(diameter),
                                     PyFloat_FromDouble(height));
    PyObject *ret = PyObject_CallObject(pFunc, args);
    /* ... unpack ret into res and Py_DECREF temporaries ... */
    return res;
}
I can do things successfully with ret if there are no import statements in my .py file; however, if I import a shared library as mentioned before (such as import math), I get an error. It's fair to note that I can successfully import other .py files from that .py file without error.
My myModule.py Code:
try:
    import math
except ImportError as e:
    print(e)

def cone_funcEval(diameter, height):
    radius = diameter/2.0
    volume = math.pi*radius*radius*height/3.0
    slantHeight = (radius**2.0 + height**2.0)**0.5
    surfaceArea = math.pi*radius*(radius + slantHeight)
    return volume, slantHeight, surfaceArea
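For reference, calling the function directly in Python shows what the embedded call should hand back to C (the input values here are chosen arbitrarily):

# Direct call, useful as a cross-check for the embedded version.
from myModule import cone_funcEval

volume, slant, area = cone_funcEval(diameter=2.0, height=3.0)
print(volume)  # pi * 1.0**2 * 3.0 / 3.0 = pi, about 3.14159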
Please note that I know this logic would be faster in C; I am just trying to demonstrate creating the linkage, and I consider importing core modules to be an expected feature. So this is just my example problem. With this .py file, I trigger C to print my ImportError as follows:
/path/to/anaconda3/lib/python3.7/lib-dynload/math.cpython-37m-x86_64-linux-gnu.so: undefined symbol: PyArg_Parse
Indeed, Python is not linked to the math .so if I run ldd on it, but I suppose that linkage is meant to be resolved at runtime. I can make this work if I create a standalone executable using the -export-dynamic flag with gcc, but I cannot make it work otherwise, as that flag appears to have no effect when compiling a shared library.
A Word About the System Python:
The system Python's core modules are located in /usr/lib64/pythonX.Y/lib-dynload. When I run ldd on /usr/lib64/pythonX.Y/lib-dynload/math.cpython-XYm-x86_64-linux-gnu.so it identifies the following:
linux-vdso.so.1 => (0x00007fffaf3ab000)
libm.so.6 => /usr/lib64/libm.so.6 (0x00007f2cd3aff000)
libpythonX.Ym.so.1.0 => /usr/lib64/libpythonX.Ym.so.1.0 (0x00007f2cd35da000)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f2cd33be000)
libc.so.6 => /usr/lib64/libc.so.6 (0x00007f2cd2ff0000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2cd400d000)
libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007f2cd2dec000)
libutil.so.1 => /usr/lib64/libutil.so.1 (0x00007f2cd2be9000)
whereas when I run ldd on my anaconda3's math it produces the following:
linux-vdso.so.1 => (0x00007ffe1338a000)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f1774dc3000)
libc.so.6 => /usr/lib64/libc.so.6 (0x00007f17749f5000)
libm.so.6 => /usr/lib64/libm.so.6 (0x00007f17746f3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f1774fdf000)
This, to me, indicates they're compiled somewhat differently. Most notably, the system Python's module is linked against its libpython.so, whereas my anaconda3 copy is not. I did attempt to compile a brand-new Python from source using --enable-shared to replicate this, to no avail.
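As an aside, a quick way to compare how two interpreters were built is to query the build configuration from inside each one; a small sketch using the standard sysconfig module (the keys below are standard config variables on POSIX builds):

# Run under each interpreter and compare the output.
import sysconfig

print(sysconfig.get_config_var('Py_ENABLE_SHARED'))  # 1 if built with --enable-shared
print(sysconfig.get_config_var('LDLIBRARY'))         # e.g. libpython3.7m.so vs libpython3.7m.a
print(sysconfig.get_config_var('LINKFORSHARED'))     # often contains -Xlinker -export-dynamic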
My Compilation Flags:
CFLAGS as produced by python3-config --cflags
-I/path/to/anaconda3/include/python3.7m -Wno-unused-result -Wsign-compare -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -ffunction-sections -pipe -isystem /path/to/anaconda3/include -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -flto -DNDEBUG -fwrapv -O3 -Wall
LDFLAGS
-L/path/to/anaconda3/lib/python3.7/config-3.7m-x86_64-linux-gnu -L/path/to/anaconda3/lib -lm -lrt -lutil -ldl -lpthread -lc -lpython3 -fno-lto
Ideal Solution:
I've searched all over SO and found no solutions that have worked. Ideally, I would like one of the following:
Figure out what flags or steps I need so that my shared library correctly exposes the Python symbols (declared in Python.h) that the dynamically loaded libraries resolve at runtime
Figure out how to properly compile a Python from source that looks like the system Python's (or an explanation of why I can't do that)
A good lecturing as to how I have no clue what I'm doing
I have a C++ file called VBB.cpp which contains implementations of a few classes, and I wrote Python bindings for those classes using the pybind11 library; these are located in bindings.cpp. I can successfully compile the code with:
g++ -O3 -Wall -shared -std=c++11 -fPIC `python3 -m pybind11 --includes` bindings.cpp VBB.cpp -o VBB`python3-config --extension-suffix`
and then use the C++ code from Python by importing the module.
I want to turn this into a Python package via setuptools. I used the example setup.py file available at https://github.com/pybind/python_example and modified the Extension call to:
Extension(
    'VBB',
    ['src/bindings.cpp', 'src/VBB.cpp'],
    include_dirs=[
        # Path to pybind11 headers
        get_pybind_include(),
        get_pybind_include(user=True)
    ],
    language='c++'
),
If I run the install script, it compiles, but if I then try to run import VBB in Python, I get the following error:
ImportError: dynamic module does not define module export function (PyInit_VBB)
I'm new to using setuptools, so I'm not sure if I'm doing something wrong. The example package from GitHub works without any issues.
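For what it's worth, that particular ImportError usually means the module name compiled into the binary does not match the file name the import machinery finds: Python dlopens VBB*.so and looks for a C-level symbol named PyInit_VBB, which pybind11 generates from the name given to PYBIND11_MODULE. A sketch of the constraint, reusing the sources above:

# The first argument of Extension must match the PYBIND11_MODULE(name, m)
# declaration in bindings.cpp; 'VBB' here implies PYBIND11_MODULE(VBB, m).
from setuptools import Extension

ext = Extension(
    'VBB',
    ['src/bindings.cpp', 'src/VBB.cpp'],
    language='c++',
)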
I'm trying to compile a Python wrapper to a small C++ library I've written. I've written the following setup.py script to try to use setuptools to compile the wrapper:
from setuptools import setup, Extension
import numpy as np
import os
atmcmodule = Extension(
'atmc',
include_dirs=[np.get_include(), '/usr/local/include'],
libraries=['mcopt', 'c++'], # my C++ library is at ./build/libmcopt.a
library_dirs=[os.path.abspath('./build')],
sources=['atmcmodule.cpp'],
language='c++',
extra_compile_args=['-std=c++11', '-v'],
)
setup(name='tracking',
version='0.1',
description='Particle tracking and MC optimizer module',
ext_modules=[atmcmodule],
)
However, when I run python setup.py build on OS X El Capitan, clang complains about not finding some C++ standard library headers:
In file included from atmcmodule.cpp:7:
In file included from ./mcopt.h:11:
In file included from ./arma_include.h:4:
/usr/local/include/armadillo:54:12: fatal error: 'initializer_list' file not found
#include <initializer_list>
^
1 error generated.
error: command 'gcc' failed with exit status 1
Passing the -v flag to the compiler shows that it is searching the following include paths:
#include <...> search starts here:
/Users/[username]/miniconda3/include
/Users/[username]/miniconda3/lib/python3.4/site-packages/numpy/core/include
/usr/local/include
/Users/[username]/miniconda3/include/python3.4m
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/c++/4.2.1
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/c++/4.2.1/backward
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.0/include
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/System/Library/Frameworks (framework directory)
End of search list.
This apparently doesn't include the path to the C++ standard library headers. If I compile a small test C++ source file with the -v option, I can see that clang++ normally also searches the path /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1, and if I include this path in the include_dirs option for Extension in my setup.py script, then the extension module compiles correctly and works. However, hard-coding this path into the script doesn't seem like a good solution, since this module also needs to work on Linux.
So, my question is: how do I properly make setuptools include the required headers?
Update (11/22/2015)
As setuptools tries to compile the extension, it prints the first command it's running:
gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/[username]/miniconda3/include -arch x86_64 -I/Users/[username]/miniconda3/lib/python3.4/site-packages/numpy/core/include -I/Users/[username]/Documents/Code/ar40-aug15/monte_carlo/mcopt -I/usr/local/include -I/Users/[username]/miniconda3/include/python3.4m -c /Users/[username]/Documents/Code/ar40-aug15/monte_carlo/atmc/atmcmodule.cpp -o build/temp.macosx-10.5-x86_64-3.4/Users/[username]/Documents/Code/ar40-aug15/monte_carlo/atmc/atmcmodule.o -std=c++11 -fopenmp -v
If I paste this command into a terminal and run it myself, the extension compiles successfully. So I suspect setuptools is either modifying some environment variables I'm not aware of, or lying a little about the commands it's actually running.
Setuptools tries to compile C/C++ extension modules with the same flags used to compile the Python interpreter. After checking the flags used to compile my Python install (from Anaconda), I found it was targeting a minimum Mac OS X version of 10.5. This makes the build use GCC's libstdc++ instead of clang's libc++ (which supports C++11).
This can be fixed by either setting the environment variable MACOSX_DEPLOYMENT_TARGET to 10.9 (or later), or adding '-mmacosx-version-min=10.9' to extra_compile_args.
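Concretely, either of the following should work (a sketch based on the fix above; the extension fields are abbreviated from the question):

# Option 1: override the deployment target before setup() runs
# (or `export MACOSX_DEPLOYMENT_TARGET=10.9` in the shell).
import os
os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.9'

# Option 2: pass the flag per extension.
from setuptools import Extension
atmcmodule = Extension(
    'atmc',
    sources=['atmcmodule.cpp'],
    language='c++',
    extra_compile_args=['-std=c++11', '-mmacosx-version-min=10.9'],
)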
I have a Python2.6 program that can load Python modules compiled to .so files using Cython. I used Cython to compile the .py modules to .so files and everything works fine.
This is the setup.py file I use with Cython:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
ext_modules = [
Extension("ldap", ["ldap.pyx"]),
Extension("checker", ["checker.pyx"]),
Extension("finder", ["finder.pyx"]),
Extension("utils", ["utils.pyx"]),
]
setup(
name = 'bchecker',
cmdclass = {'build_ext': build_ext},
ext_modules = ext_modules
)
So I know I can compile Python modules using Cython (I guess Cython creates C files from my Python files and then compiles them), but can I compile my main Python program into something I can execute on a Linux platform? If so, a Cython command-line example would be appreciated. Thanks.
Contrary to what Adam Matan and others assert, you can in fact create a single executable binary file using Cython, from a pure Python (.py) file.
Yes, Cython is intended to be used as stated - as a way of simplifying writing C/C++ extension modules for the CPython python runtime.
But, as nudzo alludes to in this comment, you can use the --embed switch at the command line prompt.
Here is an extremely simple example. I am performing this from a Debian Sid workstation, using python3 and cython3.
Make sure you have python-dev or python3-dev packages installed beforehand.
1) Create a very simple Python program called hello.py
$ cat hello.py
print("Hello World!")
2) Use Cython to compile your Python program into C...
cython3 --embed -o hello.c hello.py
3) Use GCC to compile hello.c into an executable file called hello...
gcc -Os -I /usr/include/python3.3m -o hello hello.c -lpython3.3m -lpthread -lm -lutil -ldl
4) You end up with a file called hello ...
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.32,
BuildID[sha1]=006f45195a26f1949c6ed051df9cbd4433e1ac23, not stripped
$ ldd hello
linux-vdso.so.1 (0x00007fff273fe000)
libpython3.3m.so.1.0 => /usr/lib/x86_64-linux-gnu/libpython3.3m.so.1.0 (0x00007fc61dc2c000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fc61da0f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc61d70b000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007fc61d508000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fc61d304000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc61cf5a000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fc61cd52000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007fc61cb28000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fc61c90f000)
/lib64/ld-linux-x86-64.so.2 (0x00007fc61e280000)
In this case, the executable is dynamically linked to Python 3.3 on my Debian system.
5) run hello...
$ ./hello
Hello World!
As you can see, using this method you can basically use Cython to convert your pure Python applications into executable, compiled object code.
I am using this method for vastly more complex applications - for example, a full blown Python/PySide/Qt application.
For different versions of Python, you tailor the gcc -I and -l switches to suit.
You can then package the executable as a distribution (.deb, etc.) file, without having to package the Python/PySide/Qt files - the advantage being that your application should still be able to run even after a distribution update to the same versions of Python, etc. on that distribution.
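If you don't want to hard-code those switches, the running interpreter can report them itself. A small sketch using the standard sysconfig module (equivalent information comes from python3-config --includes --libs):

# Print the -I/-L/-l switches for embedding the current interpreter.
import sysconfig

print('-I' + sysconfig.get_paths()['include'])             # e.g. -I/usr/include/python3.3m
print('-L' + sysconfig.get_config_var('LIBDIR'))           # directory containing libpythonX.Ym
print('-lpython' + sysconfig.get_config_var('LDVERSION'))  # e.g. -lpython3.3m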
Take a look at the answers to Can Cython compile to an EXE?, which, contrary to all the other answers here, say that yes, it is possible to compile to an executable.
The links at Embedding Cython seem to be a good place to start, but it's not Cython's primary purpose so I don't know how straightforward it would be.
I don't know if this will help or not, but Nudzo is correct. You can get it with cython --embed -o main.c main.py, and then I compile the result with cl /EHsc.
Look at this post:
cython <cython_file> --embed
and then just
gcc <C_file_from_cython> -I<include_directory> -L<directory_containing_libpython> -l<name_of_libpython_without_lib_on_the_front> -o <output_file_name>
Here is an example:
cython3 main.py --embed
gcc main.c -I /usr/include/python3.8/ -L /lib/x86_64-linux-gnu/ -l python3.8 -o main
You can't; Cython is not made to compile Python, nor to turn it into an executable.
To produce an .exe file, use py2exe.
To produce a package for Mac or Linux, use the regular packaging process, as there is nothing specific about a script-language program in a Unix environment.