I am trying to compile/bind a Python extension written in C++ that uses NEON intrinsics, using a
setuptools build with pybind11. But it keeps giving me errors.
(arm_neon.h:28:2: error: "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard"
#error "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard")
To reproduce:
clone https://github.com/pybind/python_example
Add #include <arm_neon.h> to main.cpp.
Then I tried to install/build it using pip, which gives me the following error:
arm_neon.h:28:2: error: "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard"
#error "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard"
So, I tried to add these options to the compiler flags by defining:
extra_compile_args=["-mfloat-abi=hard", "-O3", "-mcpu=native"]
But it still fails, and I see this in the output:
clang: warning: argument unused during compilation: '-mfloat-abi=hard' [-Wunused-command-line-argument]
However, there are some gcc parts in the output as well, so I tried to force the clang++ compiler by
setting:
os.environ["CC"] = "clang++"
at the top of setup.py.
However I still get the same error.
(I have also tried a bunch of other tricks, but I feel that I'm just searching in the wrong direction, so I will not list them.)
I can compile a standalone C++ file with clang, so it seems like I'm doing something wrong with the setuptools configuration.
I am running a MacBook Pro M2.
So I figured it out. It turns out the default Anaconda build uses x86_64 and Rosetta instead of native ARM.
So you have to download the Miniconda build that supports ARM to make this work!
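A quick way to confirm which architecture your interpreter was built for, before digging into compiler flags, is to ask Python itself (a generic CPython check, not specific to conda):

```python
import platform

# On Apple silicon this prints 'arm64' for a native interpreter and
# 'x86_64' when the interpreter is an Intel build running under Rosetta.
print(platform.machine())
```

If this prints x86_64 on an M1/M2 machine, the NEON/float-ABI errors come from building for the wrong architecture, and switching to an ARM-native Python is the fix rather than any compile flag.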
I've been working on a GitHub project that cross compiles Python for Android
https://github.com/GRRedWings/python3-android/tree/clang
Google is deprecating gcc in the NDK soon, so I have been trying to switch from gcc to clang.
I stumbled across this project a couple of years back and have been trying to maintain it with current versions of the libraries, but this one has me stumped. I have updated the branch above, and I think it's compiling with clang, but it fails to link with the following error:
/home/python3-android/sdk/android-ndk-r16b/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/aarch64-linux-android/bin/ld: unrecognised emulation mode: elf_x86_64
Supported emulations: aarch64linux aarch64elf aarch64elf32 aarch64elf32b aarch64elfb armelf armelfb aarch64linuxb aarch64linux32 aarch64linux32b armelfb_linux_eabi armelf_linux_eabi
clang: error: linker command failed with exit code 1 (use -v to see invocation)
../Makefile.shared:164: recipe for target 'link_app.' failed
At the end of the first line it says: unrecognised emulation mode: elf_x86_64. I don't understand where it's getting that emulation mode, or how to change it.
I get the same error for arm and arm64. I use two files to set up the environment and the makefile variables:
Env -- https://github.com/GRRedWings/python3-android/blob/clang/env
and
build_single.sh -- https://github.com/GRRedWings/python3-android/blob/clang/mk/build_single.sh
I am relatively new to cross-compiling and what it requires, and at this point I just don't know where else to look.
Based on the script I inherited, I have both CPPFLAGS and LDFLAGS starting with
-target aarch64-none-linux-android -gcc-toolchain ${NDK_ROOT}/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64
You aren't passing -target aarch64-linux-android, so Clang is defaulting to targeting x86_64, but using the aarch64 linker that you provided.
The NDK ships a tool to do this work for you: https://developer.android.com/ndk/guides/standalone_toolchain.html
We wish to create an R package that wraps the Python runtime and is 'dependency free' (i.e. one need not install Python for the R package to work). There is already a package that allows R to call Python (CRAN package rPython), but it requires that Python be installed on the target machine. We would like all dependencies to be installed when the envisioned package is installed via the standard R package install mechanism.

I forked rPython and created this variation: https://github.com/brucehoff/rWithPython. The package works on Unix and Mac: it downloads and builds the Python source and then accesses the Python runtime from R. By using a static version of libpython, it builds a shared object that can be installed on another machine without leaving any dependencies behind.

The problem is how to get it to work on Windows. On Windows, R packages are built using "R Tools" (https://cran.r-project.org/bin/windows/Rtools/), which uses the cygwin/mingw stack. I tried running configure/make as on Unix, but it fails. I then tried linking a static libpython that I built on a Windows box using Visual Studio. Python is very helpful, providing instructions for changing the Visual Studio build to create a static library:
PCBuild\readme.txt says, in part:
The solution has no configuration for static libraries. However it is
easy to build a static library instead of a DLL. You simply have to set
the "Configuration Type" to "Static Library (.lib)" and alter the
preprocessor macro "Py_ENABLE_SHARED" to "Py_NO_ENABLE_SHARED". You may
also have to change the "Runtime Library" from "Multi-threaded DLL
(/MD)" to "Multi-threaded (/MT)".
This works great and I get a static library for python. However when I try to link the library under cygwin/rtools the linker gives an error:
gcc -m32 -I"C:/bin/R/include" -DNDEBUG -I"d:/RCompile/CRANpkg/extralibs64/local/include" -I"C:/Python35/Include" -I"C:/Python35/PC" -O3 -Wall -std=gnu99 -mtune=core2 -c pycall.c -o pycall.o
gcc -m32 -shared -s -static-libgcc -o rWithPython.dll tmp.def pycall.o -LC:/Python35/PCbuild/win32 -lpython35 -Ld:/RCompile/CRANpkg/extralibs64/local/lib/i386 -Ld:/RCompile/CRANpkg/extralibs64/local/lib -LC:/bin/R/bin/i386 -lR
c:/rtools/gcc-4.6.3/bin/../lib/gcc/i686-w64-mingw32/4.6.3/../../../../i686-w64-mingw32/bin/ld.exe: C:/Python35/PCbuild/win32/libpython35.a(C:/hgpy/cpython/PCbuild/obj//win32_Release/pythoncore/getbuildinfo.obj): Recognised but unhandled machine type (0x14c) in Import Library Format archive
pycall.o:pycall.c:(.text+0x5): undefined reference to '_imp__Py_Initialize'
pycall.o:pycall.c:(.text+0x1a): undefined reference to '_imp__PyRun_SimpleStringFlags'
pycall.o:pycall.c:(.text+0x31): undefined reference to '_imp__Py_Finalize'
pycall.o:pycall.c:(.text+0x56): undefined reference to '_imp__PyRun_SimpleStringFlags'
pycall.o:pycall.c:(.text+0x8c): undefined reference to '_imp__PyImport_AddModule'
pycall.o:pycall.c:(.text+0x95): undefined reference to '_imp__PyModule_GetDict'
pycall.o:pycall.c:(.text+0xa8): undefined reference to '_imp__PyDict_GetItemString'
pycall.o:pycall.c:(.text+0xb5): undefined reference to '_imp__PyUnicode_AsUTF8String'
collect2: ld returned 1 exit status
no DLL was created
ERROR: compilation failed for package 'rWithPython'
From what I've read, "machine type" 0x14c is "Intel 386 or later, and compatible processors", i.e. the most common/expected machine type. So I'm guessing the error is a red herring: the problem is an incompatibility between compilers, not machines.
Any suggestions on how to proceed are appreciated!!
---- UPDATE ----
I verified that I can build/link (1) when linking in the library(ies) that are part of the standard Windows Python installation and (2) when building from source using MS Visual Studio, without modifying the build in any way. The problem is replicated as soon as I modify the Visual Studio build settings to produce a static library (python35.lib) following the guidelines in PCbuild\readme.txt. Those guidelines are a bit ambiguous. I will probe further, but if anyone has had success generating a static Python library in MS VS, please let me know!
---- ANOTHER UPDATE ----
We have a solution to this question:
https://github.com/Sage-Bionetworks/PythonEmbedInR
(The code is open source, under the GPL-3 license.)
You can see how we solved the problem of running on Windows:
https://github.com/Sage-Bionetworks/PythonEmbedInR/blob/master/configure.win
In short, we do not try to compile from source or build a static library; instead we use Python's 'embeddable zip' and make sure its libraries are on the search path for our application. It seems to work great!
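As a rough sketch of the search-path part of that approach (the helper name is mine, not from PythonEmbedInR; os.add_dll_directory is the real Windows-only API added in Python 3.8): once the embeddable zip is unpacked next to the application, its directory just needs to be made visible to the DLL loader:

```python
import os
import sys

def expose_embedded_python(dll_dir):
    """Hypothetical helper: make an unpacked 'embeddable zip' Python
    visible to the host process. dll_dir is the directory holding
    python3x.dll and the stdlib zip."""
    if sys.platform == "win32":
        # Python 3.8+ only searches explicitly registered DLL directories.
        os.add_dll_directory(dll_dir)
    # Older loaders (and child processes) still consult PATH.
    os.environ["PATH"] = dll_dir + os.pathsep + os.environ.get("PATH", "")
```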
You need to put the Py_NO_ENABLE_SHARED define in your application code as well: basically, anywhere Python.h is included. If it is not present, the header assumes you have linked against a .dll and acts accordingly (and incorrectly).
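For illustration only, here is how that macro could be injected into every translation unit in a setuptools-style build (the module/file names are taken from the build log above; in the R package the equivalent is simply adding -DPy_NO_ENABLE_SHARED to the compiler flags in the build scripts):

```python
from setuptools import Extension

# Sketch: the macro must be defined wherever Python.h is included,
# otherwise the header assumes a DLL build and emits _imp__* references.
ext = Extension(
    "pycall",
    sources=["pycall.c"],
    define_macros=[("Py_NO_ENABLE_SHARED", "1")],
)
```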
I've been stuck on this issue for a while now. I'm trying to install graph-tool - http://graph-tool.skewed.de/download#macos - and I have the prereqs from following these steps, which the graph-tool site links to: https://gist.github.com/openp2pdesign/8864593
Instead of brew install, which didn't seem to give me all the files, I went to Boost's official site and downloaded it from there, following these steps: http://www.boost.org/doc/libs/1_41_0/more/getting_started/unix-variants.html It's mainly getting a tar file and untarring it.
I then put my boost install here:
/usr/local/boost_1_55_0
I did a small C++ example and confirmed Boost works (using "Build a Simple Program Using Boost" from http://www.boost.org/doc/libs/1_41_0/more/getting_started/unix-variants.html).
Now the meat of the problem: trying to install graph-tool. In the very last step, I do
./configure PYTHON_EXTRA_LDFLAGS="-L/usr/local/bin"
(The PYTHON_EXTRA_LDFLAGS="-L/usr/local/bin" just makes the configure script find Python alright.)
But I get this error. (It finds Python fine, but not boost!)
...
================
Detecting python
================
checking for a Python interpreter with version >= 2.6... python
checking for python... /Users/daze/Library/Enthought/Canopy_64bit/User/bin/python
checking for python version... 2.7
checking for python platform... darwin
checking for python script directory... ${prefix}/lib/python2.7/site-packages
checking for python extension module directory... ${exec_prefix}/lib/python2.7/site-packages
checking for python2.7... (cached) /Users/daze/Library/Enthought/Canopy_64bit/User/bin/python
checking for a version of Python >= '2.1.0'... yes
checking for a version of Python == '2.7.3'... yes
checking for the distutils Python package... yes
checking for Python include path... -I/Applications/Canopy.app/appdata/canopy-1.1.0.1371.macosx-x86_64/Canopy.app/Contents/include/python2.7
checking for Python library path... -L/Applications/Canopy.app/appdata/canopy-1.1.0.1371.macosx-x86_64/Canopy.app/Contents/lib/python2.7/config -lpython2.7
checking for Python site-packages path... /Users/daze/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages
checking python extra libraries... -ldl -framework CoreFoundation
checking python extra linking flags... -L/usr/local/bin
checking consistency of all components of python development environment... yes
graph-tool will be installed at: /Users/daze/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages
===========================
Using python version: 2.7.3
===========================
checking for boostlib >= 1.38.0... configure: error: We could not detect the boost
libraries (version 1.38 or higher). If you have a staged boost library (still not installed)
please specify $BOOST_ROOT in your environment and do not give a PATH to --with-boost option.
If you are sure you have boost installed, then check your version number looking in
<boost/version.hpp>. See http://randspringer.de/boost for more documentation.
Attempt 2: I then tried setting BOOST_ROOT properly:
In my ~/.bash_profile:
export BOOST_ROOT="/usr/local/boost_1_55_0"
But it still did no good, so I unset that.
Attempt 3: I then tried explicitly specifying where boost is installed:
./configure --with-boost="/usr/local/boost_1_55_0" PYTHON_EXTRA_LDFLAGS="-L/usr/local/bin"
But it still can't find Boost, and yields the same error in the end: "We could not detect the boost libraries (version 1.38 or higher)."
It's been bugging me all day. I've read carefully, and went to the randspringer.de/boost site and saw this in the FAQ - http://www.randspringer.de/boost/faq.html#id2514912:
Q: I do not understand the configure error message
At configure time I get:
checking for boostlib >= 1.33... configure: error: We could not detect
the boost libraries (version 1.33 or higher). If you have a staged
boost library (still not installed) please specify $BOOST_ROOT in your
environment and do not give a PATH to --with-boost option. If you are
sure you have boost installed, then check your version number looking
in <boost/version.hpp>. See http://randspringer.de/boost for more
documentation.
I don't know if I use a staged version of boost. What is it and what
can I do ?
A: If you did not compile Boost by yourself you don't have a staged
version and you don't have to set BOOST_ROOT. Look here for an
explanation of different kind of installations.
If you are sure you have Boost installed then specify the directory
with
./configure --with-boost=your-boost-directory.
If it still does not work, please check the version number in
boost/version.hpp and compare it with the version requested in
configure.ac.
And I don't know what to look for when comparing version numbers; nothing I found there stood out.
Hoping someone has at least an idea on what other approaches to take.
Hooray, my first chance to give back to Stack Overflow! I've been dealing with this issue myself the past 2 days.
Solution
Upgrade clang via Xcode
Make a symlink to boost that includes the version number
/usr/local/include/boost-1_55.0 -> ../Cellar/boost/1.55.0/include/boost
(included because I installed Boost using Brew and had this issue)
Edit the generation of CXXFLAGS in configure so that it looks like this:
old_cxxflags="$CXXFLAGS"
CXXFLAGS="${CXXFLAGS} -std=gnu++11 -stdlib=libc++"
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C++ compiler supports -std=gnu++11" >&5
$as_echo_n "checking whether C++ compiler supports -std=gnu++11... " >&6; }
Run
./configure --disable-sparsehash CXX="/usr/bin/clang++" PYTHON_EXTRA_LDFLAGS="-L/usr/local/bin"
Versions
OS: Mac OS X 10.8.5
Clang: Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin12.5.0
Thread model: posix
Graph-tool: 2.2.29.1
Boost: 1.55.0
Explanation
If you go through the configure code and try to compile the confdefs.h files made during configure, you'll see clang error out upon encountering the -Wno-unused-local-typedefs flag. This is the actual cause of the "We could not detect the boost libraries (version 1.33 or higher)" error, not a failure to find the boost files. This issue is fixed in newer versions of clang.
The configure test for version number is goofy. It expects the boost include directory to contain the version number.
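For reference, the version number that check is after is mechanical to read out yourself: BOOST_VERSION in boost/version.hpp encodes major*100000 + minor*100 + patch. A small sketch (the helper function is mine):

```python
import re

def parse_boost_version(version_hpp_text):
    """Decode BOOST_VERSION from the text of boost/version.hpp.
    The value encodes major*100000 + minor*100 + patch."""
    m = re.search(r"#define\s+BOOST_VERSION\s+(\d+)", version_hpp_text)
    if m is None:
        raise ValueError("BOOST_VERSION not found")
    n = int(m.group(1))
    return (n // 100000, (n // 100) % 1000, n % 100)

# For Boost 1.55.0, version.hpp contains: #define BOOST_VERSION 105500
print(parse_boost_version("#define BOOST_VERSION 105500"))  # (1, 55, 0)
```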
While running make, you may run into the following errors:
./../graph_adjacency.hh:26:10: fatal error: 'tuple' file not found
This is caused by referencing the wrong standard library [1]
./../graph_adaptor.hh:655:39: error: expected ';' in 'for' statement specifier
for(typeof(removed_edges.begin()) iter = removed_edges.begin();
./../graph_adaptor.hh:655:39: error: use of undeclared identifier 'tier'
This is caused by referencing the wrong C++ standard (c++11 instead of gnu++11) [2]
References
[1] No member named 'forward' in namespace 'std'
[2] I'm having some trouble with C++11 in Xcode
I think that you're currently pointing --with-boost to the boost parent directory, not the boost libraries.
Try
./configure --with-boost="/usr/local/boost_1_55_0/libs/" PYTHON_EXTRA_LDFLAGS="-L/usr/local/bin"
After much effort, I've finally got matplotlib and all its dependencies working harmoniously on Snow Leopard 10.6.8. I'd now like to tweak its configuration slightly to allow me to use my 32-bit installation of wxPython as its backend. The problem is that numpy (required by matplotlib) won't import when I use my 32-bit installation of Python 2.7.3 (python.org version).

Googling for an hour or so has led me to believe that numpy can be built and installed as 32-bit by specifying CFLAGS and LDFLAGS in conjunction with setup.py. I'm not clear on what these flags do, and not surprisingly I've had no success using them. This is what I tried from within the downloaded numpy folder:
$ CLFLAGS=-m32 LDFLAGS=-m32 python setup.py install
I get a few error messages, but a 64-bit-compatible version of numpy does arrive in my site-packages folder. When I use the 32-bit interpreter, however, I get an error:
ImportError: dynamic module does not define init function (initmultiarray)
Am I right to think I can build a 32-bit numpy?
I just spent a couple of days looking around and pulling my hair out, so I thought I'd contribute here with what I found...
I had the same problem, but just setting the flags would not work for me (though this is indeed needed). In my case I have a separate 32-bit version of Python, so I did:
CFLAGS="-m32" LDFLAGS="-m32" /util/linux32/bin/python setup.py install --prefix=/util/science/gfortran-4.4.6/linux32/
(don't worry about my gfortran thing in the prefix, lucky me had to test different compilers.. ;) )
but then I would get an error, the last line would say:
"RuntimeError: Broken toolchain: cannot link a simple C program"
but if I scrolled up I had:
gcc -pthread _configtest.o -o _configtest
_configtest.o: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
_configtest.o: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o
and as you can see, there is no "-m32" flag in that call to gcc.
I tracked it back to the distutils install; for me it's in:
/util/linux32/lib/python2.7/distutils/ccompiler.py
There is probably a more elegant solution than this, like getting the CFLAGS value directly, but I am not a Python girl so I'm not sure how... ;) I could probably figure it out, but all I care about right now is finally installing numpy in 32-bit mode.
So anyway... at line 693 of that file, I changed
runtime_library_dirs=None, debug=0, extra_preargs=None,
to
runtime_library_dirs=None, debug=0, extra_preargs=['-m32'],
(in the function link_executable ; in case you have a different version of python... )
and voilà... numpy installed successfully on a 64-bit machine in 32-bit mode. I assume this would work for other modules too, since it is related to distutils, not numpy... ;)
Hope this can help someone in the future and save some time!
Eve-Marie
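As a footnote to the diagnosis above, the "could not read symbols: File in wrong format" failure can be checked mechanically: the fifth byte of an ELF header records whether an object file is 32- or 64-bit. A throwaway checker (my own helper, equivalent to running file on the .o):

```python
def elf_bits(path):
    """Return 32 or 64 depending on the ELF class of the file at path.
    EI_CLASS (byte offset 4) is 1 for ELF32 and 2 for ELF64."""
    with open(path, "rb") as f:
        header = f.read(5)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file: %s" % path)
    return 32 if header[4] == 1 else 64
```

Running this on _configtest.o before the link would have shown 64 immediately, i.e. that the -m32 flag never reached gcc.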
You could try using the free version of EPD (or the full version is free if you're in academia):
http://www.enthought.com/products/epd_free.php/
This has a 32-bit version for Mac with all of the key scientific-stack packages, including scipy, numpy and matplotlib.
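Whichever distribution you end up with, it's worth confirming the bitness of the interpreter you're actually running before rebuilding extensions; the pointer size gives it away (a generic CPython check, not specific to EPD):

```python
import struct

# 4-byte pointers mean a 32-bit interpreter, 8-byte pointers a 64-bit one.
print(struct.calcsize("P") * 8)
```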
Trying to install py-bcrypt on Win7. Python is 64-bit. First error: unable to find vcvarsall.bat. Googled a bit and learned that I needed to install MinGW. Installed it; now this:
C:\tools\python_modules\py-bcrypt-0.2>python setup.py build -c mingw32
running build
running build_py
running build_ext
building 'bcrypt._bcrypt' extension
C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -Ic:\Python27\include -Ic:\Python27\PC -c bcrypt/bcrypt_python.c -o b
d\temp.win-amd64-2.7\Release\bcrypt\bcrypt_python.o
bcrypt/bcrypt_python.c:29:26: error: expected declaration specifiers or '...' before 'u_int8_t'
bcrypt/bcrypt_python.c:29:38: error: expected declaration specifiers or '...' before 'u_int16_t'
bcrypt/bcrypt_python.c:29:49: error: expected declaration specifiers or '...' before 'u_int8_t'
bcrypt/bcrypt_python.c: In function 'bcrypt_encode_salt':
bcrypt/bcrypt_python.c:56:2: error: too many arguments to function 'encode_salt'
bcrypt/bcrypt_python.c:29:6: note: declared here
error: command 'gcc' failed with exit status 1
No idea what to do next. I guess I'll just not use bcrypt and try something else. Any other suggestions?
There is a compiled version of py-bcrypt for Windows. You can visit https://bitbucket.org/alexandrul/py-bcrypt/downloads to download the .exe file and install it.
I've looked at the bcrypt source and can't figure out why you're getting the error you are (I don't have a Windows system at hand to test on right now). Looking at the py-bcrypt issue tracker, though, it looks like it has other Windows compilation problems, so it's probably not just you. At a guess, adding "-std=c99" to the gcc arguments via extra_compile_args might fix at least some of the errors.
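A hedged sketch of wiring that flag into py-bcrypt's setup.py (the module and source names are taken from the build log above; I haven't verified this actually resolves the u_int8_t errors):

```python
from setuptools import Extension

# Sketch only: compile the C extension in C99 mode, as suggested above.
ext = Extension(
    "bcrypt._bcrypt",
    sources=["bcrypt/bcrypt_python.c"],
    extra_compile_args=["-std=c99"],
)
```

This Extension object would then be passed to setup() via ext_modules=[ext] in place of the package's original one.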
Aside from that, there are a couple of alternatives:
Bcryptor is another C-extension bcrypt implementation which may compile for your system.
Passlib is a general password hashing library. While it relies on bcryptor/pybcrypt for bcrypt support, it has builtin support for a number of other password hashes that may work for you - such as SHA512-Crypt or PBKDF2-HMAC-SHA512
Cryptacular is another general password hashing library. On Windows, it provides both BCrypt and PBKDF2-HMAC-SHA512 password hashes. (I'd link straight to those, but the documentation won't quite let me).
I stumbled upon this rather old thread while trying to get py-bcrypt installed (via pip) on Windows 7 using VS2012. Apparently this still doesn't work (I also get the "missing vcvarsall.bat" error).
There is a dedicated Windows fork for py-bcrypt called py-bcrypt-w32, which I could install without any problems using
pip install py-bcrypt-w32
I had the same issue and I fixed it by applying the patch found at this link:
http://code.google.com/p/py-bcrypt/issues/detail?can=2&start=0&num=100&q=&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary&groupby=&sort=&id=1
py-bcrypt_11.patch
I had to apply it manually.
From that thread, the source of the problem is:
According to http://groups.google.com/group/mpir-devel/msg/2c2d4cc7ec12adbb (flags defined under the various Windows OSes, cygwins, mingws and others), it's better to use _WIN32 instead of _MSC_VER. Together with the change from bzero to memset, this compiles both under MSVC and MinGW32.
Hope that helps!
Supposing you are using mingw64, you should change _MSC_VER to _WIN32 in the #ifdefs in bcrypt.c, bcrypt_python.c and pybc_blf.h.
I had this same problem with Python 3.4.1, and none of the previous answers worked for me. I eventually got the Visual Studio 2010 64-bit compiler working, and hence both cryptacular and py-bcrypt installed with easy_install. See my detailed answer here: https://stackoverflow.com/a/27033824/3800244
It's 2016 and I have faced the same issue. Download the wheel directly from https://bitbucket.org/alexandrul/py-bcrypt/downloads and then run the following:
pip install <whl-file>