When trying to port some Python code to Cython, I get the following linker error:
cls ~/workspace/Prototypes/PLPcython $ python3 setup.py build_ext --inplace
running build_ext
cythoning src/graph.pyx to src/graph.c
cythoning src/community.pyx to src/community.c
building 'PLPcython' extension
creating build
creating build/temp.macosx-10.8-x86_64-3.3
creating build/temp.macosx-10.8-x86_64-3.3/src
cc -Wno-unused-result -fno-common -dynamic -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python3/3.3.0/Frameworks/Python.framework/Versions/3.3/include/python3.3m -c src/graph.c -o build/temp.macosx-10.8-x86_64-3.3/src/graph.o
cc -Wno-unused-result -fno-common -dynamic -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/usr/local/opt/sqlite/include -I/usr/local/Cellar/python3/3.3.0/Frameworks/Python.framework/Versions/3.3/include/python3.3m -c src/community.c -o build/temp.macosx-10.8-x86_64-3.3/src/community.o
src/community.c:1414:19: warning: expression result unused [-Wunused-value]
PyObject_INIT(o, t);
~~~~~~~~~~~~~~^~~~~
/usr/local/Cellar/python3/3.3.0/Frameworks/Python.framework/Versions/3.3/include/python3.3m/objimpl.h:163:69: note: expanded from macro 'PyObject_INIT'
( Py_TYPE(op) = (typeobj), _Py_NewReference((PyObject *)(op)), (op) )
^
1 warning generated.
cc -bundle -undefined dynamic_lookup -L/usr/local/lib -L/usr/local/opt/sqlite/lib build/temp.macosx-10.8-x86_64-3.3/src/graph.o build/temp.macosx-10.8-x86_64-3.3/src/community.o -o /Users/cls/workspace/Prototypes/PLPcython/PLPcython.so
duplicate symbol _PyInit_PLPcython in:
build/temp.macosx-10.8-x86_64-3.3/src/graph.o
build/temp.macosx-10.8-x86_64-3.3/src/community.o
duplicate symbol ___pyx_module_is_main_PLPcython in:
build/temp.macosx-10.8-x86_64-3.3/src/graph.o
build/temp.macosx-10.8-x86_64-3.3/src/community.o
ld: 2 duplicate symbols for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'cc' failed with exit status 1
Apparently duplicate symbols are produced. What are _PyInit_* and ___pyx_module_is_main_*?
These are the two source files I am trying to cythonize. graph.pyx:
class Graph:
    def __init__(self, n=0):
        self.n = n
        self.m = 0
        self.z = n  # max node id
        self.adja = [[] for i in range(self.z)]
        self.deg = [0 for i in range(self.z)]

    def maxNodeId(self):
        return self.z

    def numberOfNodes(self):
        return self.n

    def numberOfEdges(self):
        return self.m

    def addEdge(self, u, v):
        if (u == v):
            self.adja[u].append(v)
            self.deg[u] += 1
        else:
            self.adja[u].append(v)
            self.adja[v].append(u)
            self.deg[u] += 1
            self.deg[v] += 1
        self.m += 1

    def hasEdge(self, u, v):
        for w in self.adja[u]:
            if w == v:
                return True
        return False

    def degree(self, u):
        return self.deg[u]

    def forNodes(self, handle):
        # assumption: all nodes exist
        for u in range(self.z):
            handle(u)

    def forEdges(self, handle):
        for u in range(self.z):
            for v in self.adja[u]:
                if v <= u:
                    handle(u, v)

    def forNeighborsOf(self, u, handle):
        for v in self.adja[u]:
            handle(v)
and community.pyx
def numberOfCommunities(zeta, G):
    labels = set()
    for label in zeta:
        if label is not None:
            labels.add(label)
    return len(labels)

def coverage(zeta, G):
    intra = 0
    inter = 0
    m = G.numberOfEdges()

    def scan(u, v):
        nonlocal intra
        nonlocal inter
        if zeta[u] == zeta[v]:
            intra += 1
        else:
            inter += 1

    G.forEdges(scan)
    print("intra-community edges: ", intra)
    print("inter-community edges: ", inter)
    assert (inter + intra == m)
    coverage = intra / m
    return coverage
I believe Cython only supports compiling a single source file into a single module. So either compile your two files as two separate modules, or use the include statement (http://docs.cython.org/src/userguide/language_basics.html#the-include-statement) to combine them into a single source file. A sketch of the first option follows below.
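For the separate-modules option, a minimal setup.py sketch might look like this (the src/ paths are taken from the build log above; the module names "graph" and "community" are an assumption, and cythonize requires a reasonably recent Cython):

# setup.py -- build graph.pyx and community.pyx as two separate extension modules
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

extensions = [
    Extension("graph", ["src/graph.pyx"]),
    Extension("community", ["src/community.pyx"]),
]

setup(name="PLPcython", ext_modules=cythonize(extensions))

After building, you would then import graph and import community as two modules instead of a single PLPcython module.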
Related
New at this, so please bear with me. I am following these instructions to try to update to the latest version of Python:
https://www.vultr.com/docs/update-python3-on-centos/
I get all the way to step 2.5 and get the following:
[user1#localhost Python-3.9.6]$ ./configure --enable-optimizations
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking for python3.9... no
checking for python3... python3
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking MACHDEP... "linux"
checking for gcc... gcc
checking whether the C compiler works... no
configure: error: in `/home/user1/Python-3.9.6':
configure: error: C compiler cannot create executables
See `config.log' for more details
My config.log file
## --------- ##
## Platform. ##
## --------- ##
hostname = localhost.localdomain
uname -m = x86_64
uname -r = 3.10.0-1160.76.1.el7.x86_64
uname -s = Linux
uname -v = #1 SMP Wed Aug 10 16:21:17 UTC 2022
/usr/bin/uname -p = x86_64
/bin/uname -X = unknown
/bin/arch = x86_64
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown
PATH: /opt/rh/devtoolset-8/root/usr/bin
PATH: /usr/local/bin
PATH: /usr/bin
PATH: /usr/local/sbin
PATH: /usr/sbin
PATH: /home/user1/.local/bin
PATH: /home/user1/bin
## ----------- ##
## Core tests. ##
## ----------- ##
configure:2848: checking build system type
configure:2862: result: x86_64-pc-linux-gnu
configure:2882: checking host system type
configure:2895: result: x86_64-pc-linux-gnu
configure:2925: checking for python3.9
configure:2955: result: no
configure:2925: checking for python3
configure:2941: found /usr/bin/python3
configure:2952: result: python3
configure:3046: checking for --enable-universalsdk
configure:3093: result: no
configure:3117: checking for --with-universal-archs
configure:3132: result: no
configure:3288: checking MACHDEP
configure:3339: result: "linux"
configure:3633: checking for gcc
configure:3649: found /opt/rh/devtoolset-8/root/usr/bin/gcc
configure:3660: result: gcc
configure:3889: checking for C compiler version
configure:3898: gcc --version >&5
gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
configure:3909: $? = 0
configure:3898: gcc -v >&5
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,lto --prefix=/opt/rh/devtoolset-8/root/usr --mandir=/opt/rh/devtoolset-8/root/usr/share/man --infodir=/opt/rh/devtoolset-8/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --with-default-libstdcxx-abi=gcc4-compatible --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-8.3.1-20190311/obj-x86_64-redhat-linux/isl-install --disable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 --build=x86_64-redhat-linux
Thread model: posix
gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)
configure:3909: $? = 0
configure:3898: gcc -V >&5
gcc: error: unrecognized command line option '-V'
gcc: fatal error: no input files
compilation terminated.
configure:3909: $? = 1
configure:3898: gcc -qversion >&5
gcc: error: unrecognized command line option '-qversion'; did you mean '--version'?
gcc: fatal error: no input files
compilation terminated.
configure:3909: $? = 1
configure:3929: checking whether the C compiler works
configure:3951: gcc conftest.c >&5
/lib/../lib64/crt1.o: file not recognized: File truncated
collect2: error: ld returned 1 exit status
configure:3955: $? = 1
configure:3993: result: no
configure: failed program was:
| /* confdefs.h */
| #define _GNU_SOURCE 1
| #define _NETBSD_SOURCE 1
| #define __BSD_VISIBLE 1
| #define _DARWIN_C_SOURCE 1
| #define _PYTHONFRAMEWORK ""
| #define _XOPEN_SOURCE 700
| #define _XOPEN_SOURCE_EXTENDED 1
| #define _POSIX_C_SOURCE 200809L
| /* end confdefs.h. */
|
| int
| main ()
| {
|
| ;
| return 0;
| }
configure:3998: error: in `/home/user1/Python-3.9.6':
configure:4000: error: C compiler cannot create executables
See `config.log' for more details
## ---------------- ##
## Cache variables. ##
## ---------------- ##
ac_cv_build=x86_64-pc-linux-gnu
ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=
ac_cv_env_CFLAGS_value=
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_LDFLAGS_set=
ac_cv_env_LDFLAGS_value=
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_MACHDEP_set=
ac_cv_env_MACHDEP_value=
ac_cv_env_PKG_CONFIG_LIBDIR_set=
ac_cv_env_PKG_CONFIG_LIBDIR_value=
ac_cv_env_PKG_CONFIG_PATH_set=set
ac_cv_env_PKG_CONFIG_PATH_value=/opt/rh/devtoolset-8/root/usr/lib64/pkgconfig
ac_cv_env_PKG_CONFIG_set=
ac_cv_env_PKG_CONFIG_value=
ac_cv_env_PROFILE_TASK_set=
ac_cv_env_PROFILE_TASK_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=
ac_cv_env_host_alias_value=
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_host=x86_64-pc-linux-gnu
ac_cv_prog_PYTHON_FOR_REGEN=python3
ac_cv_prog_ac_ct_CC=gcc
## ----------------- ##
## Output variables. ##
## ----------------- ##
ABIFLAGS=''
ALT_SOABI=''
AR=''
ARCH_RUN_32BIT=''
ARFLAGS=''
BASECFLAGS=''
BASECPPFLAGS=''
BINLIBDEST=''
BLDLIBRARY=''
BLDSHARED=''
BUILDEXEEXT=''
CC='gcc'
CCSHARED=''
CFLAGS=''
CFLAGSFORSHARED=''
CFLAGS_ALIASING=''
CFLAGS_NODIST=''
CONFIGURE_MACOSX_DEPLOYMENT_TARGET=''
CONFIG_ARGS=' '\''--enable-optimizations'\'' '\''PKG_CONFIG_PATH=/opt/rh/devtoolset-8/root/usr/lib64/pkgconfig'\'''
CPP=''
CPPFLAGS=''
CXX=''
DEFS=''
DEF_MAKE_ALL_RULE=''
DEF_MAKE_RULE=''
DFLAGS=''
DLINCLDIR=''
DLLLIBRARY=''
DTRACE=''
DTRACE_HEADERS=''
DTRACE_OBJS=''
DYNLOADFILE=''
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP=''
ENSUREPIP=''
EXEEXT=''
EXPORTSFROM=''
EXPORTSYMS=''
EXPORT_MACOSX_DEPLOYMENT_TARGET='#'
EXT_SUFFIX=''
FRAMEWORKALTINSTALLFIRST=''
FRAMEWORKALTINSTALLLAST=''
FRAMEWORKINSTALLAPPSPREFIX=''
FRAMEWORKINSTALLFIRST=''
FRAMEWORKINSTALLLAST=''
FRAMEWORKPYTHONW=''
FRAMEWORKUNIXTOOLSPREFIX='/usr/local'
GITBRANCH=''
GITTAG=''
GITVERSION=''
GNULD=''
GREP=''
HAS_GIT='no-repository'
HAVE_GETHOSTBYNAME=''
HAVE_GETHOSTBYNAME_R=''
HAVE_GETHOSTBYNAME_R_3_ARG=''
HAVE_GETHOSTBYNAME_R_5_ARG=''
HAVE_GETHOSTBYNAME_R_6_ARG=''
INSTALL_DATA=''
INSTALL_PROGRAM=''
INSTALL_SCRIPT=''
INSTSONAME=''
LDCXXSHARED=''
LDFLAGS=''
LDFLAGS_NODIST=''
LDLIBRARY=''
LDLIBRARYDIR=''
LDSHARED=''
LDVERSION=''
LIBC=''
LIBFFI_INCLUDEDIR=''
LIBM=''
LIBOBJS=''
LIBPL=''
LIBPYTHON=''
LIBRARY=''
LIBS=''
LIBTOOL_CRUFT=''
LINKCC=''
LINKFORSHARED=''
LIPO_32BIT_FLAGS=''
LIPO_INTEL64_FLAGS=''
LLVM_AR=''
LLVM_AR_FOUND=''
LLVM_PROFDATA=''
LLVM_PROF_ERR=''
LLVM_PROF_FILE=''
LLVM_PROF_FOUND=''
LLVM_PROF_MERGER=''
LN=''
LTLIBOBJS=''
MACHDEP='linux'
MACHDEP_OBJS=''
MAINCC=''
MKDIR_P=''
MULTIARCH=''
MULTIARCH_CPPFLAGS=''
NO_AS_NEEDED=''
OBJEXT=''
OPENSSL_INCLUDES=''
OPENSSL_LDFLAGS=''
OPENSSL_LIBS=''
OPT=''
OTHER_LIBTOOL_OPT=''
PACKAGE_BUGREPORT='https://bugs.python.org/'
PACKAGE_NAME='python'
PACKAGE_STRING='python 3.9'
PACKAGE_TARNAME='python'
PACKAGE_URL=''
PACKAGE_VERSION='3.9'
PATH_SEPARATOR=':'
PGO_PROF_GEN_FLAG=''
PGO_PROF_USE_FLAG=''
PKG_CONFIG=''
PKG_CONFIG_LIBDIR=''
PKG_CONFIG_PATH='/opt/rh/devtoolset-8/root/usr/lib64/pkgconfig'
PLATFORM_TRIPLET=''
PLATLIBDIR=''
PROFILE_TASK=''
PY3LIBRARY=''
PYTHONFRAMEWORK=''
PYTHONFRAMEWORKDIR='no-framework'
PYTHONFRAMEWORKIDENTIFIER='org.python.python'
PYTHONFRAMEWORKINSTALLDIR=''
PYTHONFRAMEWORKPREFIX=''
PYTHON_FOR_BUILD='./$(BUILDPYTHON) -E'
PYTHON_FOR_REGEN='python3'
PY_ENABLE_SHARED=''
READELF=''
RUNSHARED=''
SED=''
SHELL='/bin/sh'
SHLIBS=''
SHLIB_SUFFIX=''
SOABI=''
SOVERSION='1.0'
SRCDIRS=''
TCLTK_INCLUDES=''
TCLTK_LIBS=''
THREADHEADERS=''
TRUE=''
TZPATH=''
UNIVERSALSDK=''
UNIVERSAL_ARCH_FLAGS=''
VERSION='3.9'
_PYTHON_HOST_PLATFORM=''
ac_ct_AR=''
ac_ct_CC='gcc'
ac_ct_CXX=''
ac_ct_READELF=''
bindir='${exec_prefix}/bin'
build='x86_64-pc-linux-gnu'
build_alias=''
build_cpu='x86_64'
build_os='linux-gnu'
build_vendor='pc'
datadir='${datarootdir}'
datarootdir='${prefix}/share'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
dvidir='${docdir}'
exec_prefix='NONE'
host='x86_64-pc-linux-gnu'
host_alias=''
host_cpu='x86_64'
host_os='linux-gnu'
host_vendor='pc'
htmldir='${docdir}'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
mandir='${datarootdir}/man'
oldincludedir='/usr/include'
pdfdir='${docdir}'
prefix='NONE'
program_transform_name='s,x,x,'
psdir='${docdir}'
sbindir='${exec_prefix}/sbin'
sharedstatedir='${prefix}/com'
sysconfdir='${prefix}/etc'
target_alias=''
## ----------- ##
## confdefs.h. ##
## ----------- ##
/* confdefs.h */
#define _GNU_SOURCE 1
#define _NETBSD_SOURCE 1
#define __BSD_VISIBLE 1
#define _DARWIN_C_SOURCE 1
#define _PYTHONFRAMEWORK ""
#define _XOPEN_SOURCE 700
#define _XOPEN_SOURCE_EXTENDED 1
#define _POSIX_C_SOURCE 200809L
configure: exit 77
Is there something I need to install on my Linux (CentOS 7) system to get this to work?
My current Python version is 3.6.8.
I have the following Makefile:
g++ -o OUTPUT runner.cpp main.cpp src/file1.cpp src/file2.cpp \
    -L/usr/local/cpp_gpu/lib \
    -lopencv_gapi -lopencv_stitching -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_cudabgsegm -lopencv_cudafeatures2d -lopencv_cudaobjdetect -lopencv_cudastereo -lopencv_dnn_objdetect -lopencv_dnn_superres -lopencv_dpm -lopencv_face -lopencv_freetype -lopencv_fuzzy -lopencv_hdf -lopencv_hfs -lopencv_img_hash -lopencv_intensity_transform -lopencv_line_descriptor -lopencv_mcc -lopencv_quality -lopencv_rapid -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_superres -lopencv_surface_matching -lopencv_tracking -lopencv_highgui -lopencv_datasets -lopencv_text -lopencv_plot -lopencv_videostab -lopencv_cudaoptflow -lopencv_optflow -lopencv_cudalegacy -lopencv_videoio -lopencv_cudawarping -lopencv_xfeatures2d -lopencv_shape -lopencv_ml -lopencv_ximgproc -lopencv_video -lopencv_dnn -lopencv_xobjdetect -lopencv_objdetect -lopencv_calib3d -lopencv_imgcodecs -lopencv_features2d -lopencv_flann -lopencv_xphoto -lopencv_photo -lopencv_cudaimgproc -lopencv_cudafilters -lopencv_imgproc -lopencv_cudaarithm -lopencv_core -lopencv_cudev -ldl -lm -lpthread -lrt \
    -L/usr/local/cuda/lib64 \
    -lcudart -lnppc -lnppial -lnppicc -lnppicom -lnppidei -lnppif -lnppig -lnppim -lnppist -lnppisu -lnppitc -lnpps -lcublas -lcudnn -lcufft \
    -I/usr/local/cpp_gpu/include/opencv4
I want to rewrite this as a distutils.core Extension so that it can later be used in a Cython setup. The documentation is a little unclear about how I should do this. This is what I have so far (in setup.py):
includes_dirs = [numpy.get_include(), '/usr/local/cpp_gpu/lib', '/usr/local/cuda/lib64']
library_dirs = ['/usr/local/cpp_gpu/include/opencv4']
args = ['-Wno-cpp']
files = ["src/file1.cpp", "src/file2.cpp", "main.cpp", "OUTPUT.pyx"]
libraries = ['opencv_gapi', 'opencv_stitching', 'opencv_aruco', 'opencv_bgsegm', 'opencv_bioinspired', 'opencv_ccalib',
'opencv_cudabgsegm', 'opencv_cudafeatures2d', 'opencv_cudaobjdetect', 'opencv_cudastereo', 'opencv_dnn_objdetect',
'opencv_dnn_superres', 'opencv_dpm', 'opencv_face', 'opencv_freetype', 'opencv_fuzzy', 'opencv_hdf', 'opencv_hfs',
'opencv_img_hash', 'opencv_intensity_transform', 'opencv_line_descriptor', 'opencv_mcc', 'opencv_quality',
'opencv_rapid', 'opencv_reg', 'opencv_rgbd', 'opencv_saliency', 'opencv_stereo', 'opencv_structured_light',
'opencv_phase_unwrapping', 'opencv_superres', 'opencv_surface_matching', 'opencv_tracking', 'opencv_highgui',
'opencv_datasets', 'opencv_text', 'opencv_plot', 'opencv_videostab', 'opencv_cudaoptflow', 'opencv_optflow',
'opencv_cudalegacy', 'opencv_videoio', 'opencv_cudawarping', 'opencv_xfeatures2d', 'opencv_shape', 'opencv_ml',
'opencv_ximgproc', 'opencv_video', 'opencv_dnn', 'opencv_xobjdetect', 'opencv_objdetect', 'opencv_calib3d',
'opencv_imgcodecs', 'opencv_features2d', 'opencv_flann', 'opencv_xphoto', 'opencv_photo', 'opencv_cudaimgproc',
'opencv_cudafilters', 'opencv_imgproc', 'opencv_cudaarithm', 'opencv_core', 'opencv_cudev', 'dl', 'm', 'pthread', 'rt']
libraries += ['cudart', 'nppc', 'nppial', 'nppicc', 'nppicom', 'nppidei', 'nppif', 'nppig', 'nppim', 'nppist', 'nppisu',
              'nppitc', 'npps', 'cublas', 'cudnn', 'cufft']
ext_modules = [Extension(
    "OUTPUT",
    files,
    language='c++',
    include_dirs=includes_dirs,
    library_dirs=library_dirs,
    extra_compile_args=args,
    libraries=libraries
)]
The compilation, however, fails with the following error message (it works fine with the makefile):
src/file1.hpp:7:34: fatal error: opencv2/cudaarithm.hpp: No such file or directory
#include <opencv2/cudaarithm.hpp>
I found the problem: the include dirs and library dirs should be swapped:
includes_dirs = [numpy.get_include(), '/usr/local/cpp_gpu/include/opencv4']
library_dirs = ['/usr/local/cpp_gpu/lib', '/usr/local/cuda/lib64']
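For completeness, here is a sketch of how the corrected pieces might be wired into a full setup.py. The cythonize call and the shortened libraries list are illustrative assumptions; the real list is the full OpenCV/CUDA set from the question.

import numpy
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

includes_dirs = [numpy.get_include(), '/usr/local/cpp_gpu/include/opencv4']
library_dirs = ['/usr/local/cpp_gpu/lib', '/usr/local/cuda/lib64']
libraries = ['opencv_core', 'opencv_imgproc']  # plus the rest of the OpenCV/CUDA libraries listed above
files = ["src/file1.cpp", "src/file2.cpp", "main.cpp", "OUTPUT.pyx"]

ext_modules = [Extension(
    "OUTPUT",
    files,
    language='c++',
    include_dirs=includes_dirs,
    library_dirs=library_dirs,
    extra_compile_args=['-Wno-cpp'],
    libraries=libraries,
)]

setup(name="OUTPUT", ext_modules=cythonize(ext_modules))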
My objective is to give an ECG sample (integer) and a reference to the structure as input to the C code, and to create a Python wrapper for it. Below is what I have attempted. The structure HRV_index actually has 5 members (4 floats and 1 integer), but I am trying the code with just one member. I build the shared library with:
gcc -c -Wall -Werror -fpic test.c
gcc -shared test.o -o test.so
C code:
void simple_function(int16_t ecg_wave_sample, HRV_index *HRV)
{
    Filter_CurrentECG_sample(&ecg_wave_sample, &ecg_filterout);        // filter out the line noise #40Hz cutoff 161 order
    Calculate_HeartRate(ecg_filterout, &global_HeartRate, &npeakflag); // calculate
    if (npeakflag == 1)
    {
        read_send_data(global_HeartRate, *HRV);
        printf("NN50: %d\n", HRV->nn50);
    }
}
Python code:
import ctypes
import numpy as np
import wfdb

def wrap_function(lib, funcname, restype, argtypes):
    ''' Simplify wrapping ctypes functions '''
    func = lib.__getattr__(funcname)
    func.restype = restype
    func.argtypes = argtypes
    return func

class HRV_index(ctypes.Structure):
    #_fields_ = [('mean', ctypes.c_float), ('sdnn', ctypes.c_float),('nn50', ctypes.c_int), ('pnn50', ctypes.c_float),('rmssd', ctypes.c_float)]
    _fields_ = [('nn50', ctypes.c_int)]

    def __repr__(self):
        return '({0})'.format(self.nn50)

if __name__ == '__main__':
    # load the shared library into ctypes. NOTE: don't use a hard-coded path in production code, please
    libc = ctypes.CDLL("./test.so")
    record = wfdb.rdrecord("/home/yasaswini/hp2-notebooks/notebooks/Algorithm_testing_on_database/MIT-BIH/100", channels=[0], sampto=1000)
    ECG_samples = record.p_signal[:,0]
    ECG_samples = ECG_samples * 1000
    Heart_rate_array = np.zeros(len(ECG_samples), dtype=np.int32)
    print("Pass by reference")
    simple_function = wrap_function(libc, 'simple_function', None, [ctypes.c_int, ctypes.POINTER(HRV_index)])
    a = HRV_index(0)
    print("Point in python is", a)
    for i in range(len(ECG_samples)):
        simple_function(ECG_samples[i], a)
        print("Point in python is", a)
        print()
I am getting this error:
Pass by reference
Point in python is (0)
Traceback (most recent call last):
File "/home/yasaswini/hp2-notebooks/notebooks/Algorithm_testing_on_database /structure_python_wrapper/test.py", line 45, in <module>
simple_function(ECG_samples[i], a)
ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type
ECG_samples[i] is an integer, so why is it showing a wrong-type error?
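(A guess at the cause, for context: with wfdb, record.p_signal is typically a NumPy float array, so after multiplying by 1000 each ECG_samples[i] is a numpy.float64 rather than a Python int, which ctypes rejects for an integer argtype. A minimal sketch of an explicit conversion, reusing the names defined in the snippet above:)

import ctypes

# Hypothetical adjustment: match the C signature's int16_t and pass a plain
# Python int instead of a numpy.float64 value.
simple_function = wrap_function(libc, 'simple_function', None,
                                [ctypes.c_int16, ctypes.POINTER(HRV_index)])

a = HRV_index(0)
for i in range(len(ECG_samples)):
    simple_function(int(ECG_samples[i]), ctypes.byref(a))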
Running the same Python script directly with python3 versus through an embedded interpreter using libpython3 gives different execution times.
$ time PYTHONPATH=. ./simple
real 0m6,201s
user 1m3,680s
sys 0m0,212s
$ time PYTHONPATH=. python3 -c 'import test; test.run()'
real 0m5,193s
user 0m53,349s
sys 0m0,164s
(Removing the contents of __pycache__ between runs does not seem to have an impact.)
Currently, calling python3 with the script is faster; in my actual use case it is about 1.5x faster than the same script run from within the embedded interpreter.
I would like to (1) understand where the difference comes from, and (2) know whether it is possible to get the same performance with an embedded interpreter (using e.g. Cython is currently not an option).
Code
simple.cpp
#include <Python.h>

int main()
{
    Py_Initialize();
    const char* pythonScript = "import test; test.run()";
    int result = PyRun_SimpleString(pythonScript);
    Py_Finalize();
    return result;
}
Compilation:
g++ -std=c++11 -fPIC $(python3-config --cflags) simple.cpp \
$(python3-config --ldflags) -o simple
test.py
import sys
sys.stdout = open('output.bin', 'bw')

import mandel

def run():
    mandel.mandelbrot(4096)
mandel.py
Tweaked version of the benchmarks-game Mandelbrot (see License):
from contextlib import closing
from itertools import islice
from os import cpu_count
from sys import stdout


def pixels(y, n, abs):
    range7 = bytearray(range(7))
    pixel_bits = bytearray(128 >> pos for pos in range(8))
    c1 = 2. / float(n)
    c0 = -1.5 + 1j * y * c1 - 1j
    x = 0
    while True:
        pixel = 0
        c = x * c1 + c0
        for pixel_bit in pixel_bits:
            z = c
            for _ in range7:
                for _ in range7:
                    z = z * z + c
                if abs(z) >= 2.: break
            else:
                pixel += pixel_bit
            c += c1
        yield pixel
        x += 8


def compute_row(p):
    y, n = p
    result = bytearray(islice(pixels(y, n, abs), (n + 7) // 8))
    result[-1] &= 0xff << (8 - n % 8)
    return y, result


def ordered_rows(rows, n):
    order = [None] * n
    i = 0
    j = n
    while i < len(order):
        if j > 0:
            row = next(rows)
            order[row[0]] = row
            j -= 1
        if order[i]:
            yield order[i]
            order[i] = None
            i += 1


def compute_rows(n, f):
    row_jobs = ((y, n) for y in range(n))
    if cpu_count() < 2:
        yield from map(f, row_jobs)
    else:
        from multiprocessing import Pool
        with Pool() as pool:
            unordered_rows = pool.imap_unordered(f, row_jobs)
            yield from ordered_rows(unordered_rows, n)


def mandelbrot(n):
    write = stdout.write
    with closing(compute_rows(n, compute_row)) as rows:
        write("P4\n{0} {0}\n".format(n).encode())
        for row in rows:
            write(row[1])
So apparently the time difference comes from linking with libpython statically vs. dynamically. In a Makefile sitting next to python.c (from the reference implementation), the following builds a statically linked version of the interpreter:
snake: python.c
g++ \
-I/usr/include/python3.6m \
-pthread \
-specs=/usr/share/dpkg/no-pie-link.specs \
-specs=/usr/share/dpkg/no-pie-compile.specs \
\
-Wall \
-Wformat \
-Werror=format-security \
-Wno-unused-result \
-Wsign-compare \
-DNDEBUG \
-g \
-fwrapv \
-fstack-protector \
-O3 \
\
-Xlinker -export-dynamic \
-Wl,-Bsymbolic-functions \
-Wl,-z,relro \
-Wl,-O1 \
python.c \
/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6m.a \
-lexpat \
-lpthread \
-ldl \
-lutil \
-lexpat \
-L/usr/lib \
-lz \
-lm \
-o $@
Replacing the line /usr/lib/.../libpython3.6m.a with -lpython3.6m builds the version that ends up being slower (you also need -L/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu).
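(As a side note, not part of the original measurement: from inside either interpreter you can ask the standard sysconfig module how the Python build itself was configured, e.g. whether it was built with a shared libpython.)

import sysconfig

# 1 if this Python build was configured with --enable-shared (dynamic libpython), 0 otherwise
print(sysconfig.get_config_var('Py_ENABLE_SHARED'))
# Directory containing libpython3.Xm.a / libpython3.Xm.so
print(sysconfig.get_config_var('LIBDIR'))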
Epilog
The difference in speed exists but is not the full answer to my original problem; in practice the "slower" interpreter was executed under a specific LD_PRELOAD environment that changed how system time functions behave, in a way that messed up cProfile.
I've been trying to build Python from source with Cygwin64 on Windows 7. I've run into some issues that I was able to fix, but I'm stuck on this one.
I get a "Fatal Python error: Could not allocate TLS entry".
Here's the end of the build logs.
gcc -c -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Werror=declaration-after-statement -I. -IInclude -I./Include -I/home/mcharron/.pyenv/versions/3.5.1/include -I/home/mcharron/.pyenv/versions/3.5.1/include -DPy_BUILD_CORE \
-DHGVERSION="\"`LC_ALL=C `\"" \
-DHGTAG="\"`LC_ALL=C `\"" \
-DHGBRANCH="\"`LC_ALL=C `\"" \
-o Modules/getbuildinfo.o ./Modules/getbuildinfo.c
gcc -L/home/mcharron/.pyenv/versions/3.5.1/lib -L/home/mcharron/.pyenv/versions/3.5.1/lib -o Programs/_freeze_importlib Programs/_freeze_importlib.o Modules/getbuildinfo.o Parser/acceler.o Parser/grammar1.o Parser/listnode.o Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o Parser/firstsets.o Parser/grammar.o Parser/pgen.o Parser/myreadline.o Parser/parsetok.o Parser/tokenizer.o Objects/abstract.o Objects/accu.o Objects/boolobject.o Objects/bytes_methods.o Objects/bytearrayobject.o Objects/bytesobject.o Objects/cellobject.o Objects/classobject.o Objects/codeobject.o Objects/complexobject.o Objects/descrobject.o Objects/enumobject.o Objects/exceptions.o Objects/genobject.o Objects/fileobject.o Objects/floatobject.o Objects/frameobject.o Objects/funcobject.o Objects/iterobject.o Objects/listobject.o Objects/longobject.o Objects/dictobject.o Objects/odictobject.o Objects/memoryobject.o Objects/methodobject.o Objects/moduleobject.o Objects/namespaceobject.o Objects/object.o Objects/obmalloc.o Objects/capsule.o Objects/rangeobject.o Objects/setobject.o Objects/sliceobject.o Objects/structseq.o Objects/tupleobject.o Objects/typeobject.o Objects/unicodeobject.o Objects/unicodectype.o Objects/weakrefobject.o Python/_warnings.o Python/Python-ast.o Python/asdl.o Python/ast.o Python/bltinmodule.o Python/ceval.o Python/compile.o Python/codecs.o Python/dynamic_annotations.o Python/errors.o Python/frozenmain.o Python/future.o Python/getargs.o Python/getcompiler.o Python/getcopyright.o Python/getplatform.o Python/getversion.o Python/graminit.o Python/import.o Python/importdl.o Python/marshal.o Python/modsupport.o Python/mystrtoul.o Python/mysnprintf.o Python/peephole.o Python/pyarena.o Python/pyctype.o Python/pyfpe.o Python/pyhash.o Python/pylifecycle.o Python/pymath.o Python/pystate.o Python/pythonrun.o Python/pytime.o Python/random.o Python/structmember.o Python/symtable.o Python/sysmodule.o Python/traceback.o Python/getopt.o Python/pystrcmp.o Python/pystrtod.o Python/pystrhex.o Python/dtoa.o Python/formatter_unicode.o Python/fileutils.o Python/dynload_shlib.o Python/thread.o Modules/config.o Modules/getpath.o Modules/main.o Modules/gcmodule.o Modules/_threadmodule.o Modules/signalmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/_functoolsmodule.o Modules/_operator.o Modules/_collectionsmodule.o Modules/itertoolsmodule.o Modules/atexitmodule.o Modules/_stat.o Modules/timemodule.o Modules/_localemodule.o Modules/_iomodule.o Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o Modules/zipimport.o Modules/faulthandler.o Modules/_tracemalloc.o Modules/hashtable.o Modules/symtablemodule.o Modules/xxsubtype.o -ldl -lm
./Programs/_freeze_importlib \
./Lib/importlib/_bootstrap.py Python/importlib.h
./Programs/_freeze_importlib \
./Lib/importlib/_bootstrap_external.py Python/importlib_external.h
Fatal Python error: Could not allocate TLS entry
Fatal Python error: Could not allocate TLS entry
Stack trace:
Frame Function Args
000FFFFC2E0 001800719AC (000FFFFE3F4, 0000000ECD0, 7FEFCE851A8, 000FFFFDE50)
000FFFFC380 00180072F8B (00000000001, 00000000000, 000000000E8, 00000000000)
000FFFFC5D0 001801343E8 (001800C78E9, 00000000000, 7FEFD051430, 00000000000)
000FFFFC8C0 001801310BE (0000000D0BD, 00000000000, 00000000000, 00100636B54)
000FFFFC9E0 00180131539 (000FFFFC900, 00000000000, 00000000000, 00000000006)
000FFFFC9E0 0018013170A (0018020BB68, 00100636B3E, 001FFFFC9C8, 00000000000)
000FFFFC9E0 001801319CF (0018012CDEB, 00100637665, 001801523A0, 00000000000)
000FFFFC9E0 0010052A23E (0010052B4C7, 006000104D8, 00000000000, 00000000000)
00000000001 0010052CC6C (00000000000, 00000000000, 006000104D8, 00000000000)
00000000001 0010052AB86 (00000000000, 001801D4120, 000FFFFCBB0, 00100000001)
00180351670 001005A0241 (00000000000, 00000000000, 00000000030, 30001010100FF00)
000FFFFCCC0 00180047BD2 (00000000000, 00000000000, 00000000000, 00000000000)
00000000000 0018004591C (00000000000, 00000000000, 00000000000, 00000000000)
000FFFFFFF0 001800459B4 (00000000000, 00000000000, 00000000000, 00000000000)
End of stack trace
Makefile:729: recipe for target 'Python/importlib_external.h' failed
make: *** [Python/importlib_external.h] Aborted (core dumped)
make: *** Waiting for unfinished jobs....
Makefile:733: recipe for target 'Python/importlib.h' failed
make: *** [Python/importlib.h] Aborted (core dumped)
Has anyone seen this or have a workaround?
Thanks!