I have a simple Python + Cython project (the hello world example from http://docs.cython.org/src/tutorial/cython_tutorial.html) on my Ubuntu 16.04 x86_64 machine. I can build this project with Cython for x86_64.
How can I build the project for the armv7 version of Ubuntu 15 without using a real armv7 board/CPU?
I have arm-linux-gnueabihf-gcc (http://packages.ubuntu.com/xenial/devel/gcc-arm-linux-gnueabihf) and it can compile simple C programs for armv7. How can I change Cython's settings so that it uses the cross compiler for building shared objects for ARM?
Architecture-dependent libraries and header files are needed for cross compiling.
When testing whether the python3.5-dev package and others could be installed after dpkg --add-architecture armhf and apt-get update (after some modifications to sources.list), the result was basically:
python3.5-dev:armhf : Depends: python3.5:armhf (= 3.5.1-10) but it is not going to be installed
apt-get install python3.5:armhf is something that doesn't work, see:
The existing proposals allow for the co-installation of libraries and
headers for different architectures, but not (yet) binaries.
One possible solution that does not require a "full" virtual machine is provided by QEMU and chroot. A suitable directory for the chroot can be created with the debootstrap command. After creation, schroot can give access to that environment.
Substitute <DIRECTORY> and <USER> in the following commands:
apt-get install -y debootstrap qemu-user-static binfmt-support schroot
debootstrap --arch=armhf --foreign --include=gcc,g++,python3.5-dev xenial <DIRECTORY>
cp /usr/bin/qemu-arm-static <DIRECTORY>/usr/bin
chroot <DIRECTORY>
/debootstrap/debootstrap --second-stage
echo "deb http://ports.ubuntu.com/ubuntu-ports xenial universe" >> /etc/apt/sources.list
echo "deb http://ports.ubuntu.com/ubuntu-ports xenial multiverse" >> /etc/apt/sources.list
apt-get update
apt-get install -y cython cython3
exit
cat <<END > /etc/schroot/chroot.d/xenial-armhf
[xenial-armhf]
description=Ubuntu xenial armhf
type=directory
directory=/home/xenial-armhf
groups=sbuild,root
root-groups=sbuild,root
users=root,<USER>
END
The environment should be accessible with:
schroot -c chroot:xenial-armhf
and, for a root user session (the user must be in a group listed in root-groups):
schroot -c chroot:xenial-armhf -u root
After this, it is also possible to cross compile a Cython module:
hello.pyx:
print("hello world")
Compiling (run python3.5-config --cflags and python3.5-config --libs in the chroot to get the options; note -fPIC):
cython hello.pyx
arm-linux-gnueabihf-gcc --sysroot <DIRECTORY> -I/usr/include/python3.5m -I/usr/include/python3.5m -Wno-unused-result -Wsign-compare -g -fstack-protector-strong -Wformat -Werror=format-security -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -c hello.c
arm-linux-gnueabihf-gcc -shared --sysroot <DIRECTORY> hello.o -o hello.so -lpython3.5m -lpthread -ldl -lutil -lm
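Before moving into the chroot, an optional sanity check (my own addition, not part of the original recipe) is to confirm the ELF header of hello.so actually targets ARM:

import struct

# Read the start of the ELF header; e_machine lives at byte offset 18
# and is 40 (EM_ARM) for 32-bit ARM objects.
with open('hello.so', 'rb') as f:
    header = f.read(20)
machine = struct.unpack_from('<H', header, 18)[0]
print('ARM' if machine == 40 else 'unexpected machine id: %d' % machine)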
The module can then be tested:
schroot -c chroot:xenial-armhf
python3
import hello
Cross compiling Cython-based Python modules may also work. With the following setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import os

os.environ['CC'] = 'arm-linux-gnueabihf-gcc'
os.environ['LDSHARED'] = 'arm-linux-gnueabihf-gcc -shared'
sysroot_args = ['--sysroot', '/path/to/xenial-armhf']

setup(cmdclass={'build_ext': build_ext},
      ext_modules=[Extension("hello", ["hello.pyx"],
                             extra_compile_args=sysroot_args,
                             extra_link_args=sysroot_args)])
Building a simple hello world module was possible this way. The file name for the module was wrong, though: in this case it was hello.cpython-35m-x86_64-linux-gnu.so, i.e. it carried the host platform tag. After renaming it to hello.so, it was possible to import it.
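The rename step could presumably be automated by overriding get_ext_filename in the build_ext command; a minimal sketch (the cross_build_ext name and the plain-.so convention are my own assumptions, not part of the original answer):

import os
from Cython.Distutils import build_ext

class cross_build_ext(build_ext):
    """Hypothetical helper: emit 'hello.so' instead of the host-tagged
    'hello.cpython-35m-x86_64-linux-gnu.so'."""
    def get_ext_filename(self, ext_name):
        # The default implementation appends the *host* suffix; return a
        # plain '.so' name so the target Python can import it directly.
        return os.path.join(*ext_name.split('.')) + '.so'

Then pass cmdclass={'build_ext': cross_build_ext} to setup() instead of the stock build_ext.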
Related
I am having problems installing Python 2.4.6 on CentOS 8.
I need the old Python 2.4.6 because I have some apps running which require Python 2.4.
I downloaded the Python 2.4.6.tgz package, extracted it and ran "./configure", which works.
When I try to run "make", I see a lot of warnings and at the end the following error message is shown:
gcc -pthread -Xlinker -export-dynamic -o python \
Modules/python.o \
libpython2.4.a -lpthread -ldl -lutil -lm
libpython2.4.a(posixmodule.o): In function `posix_tmpnam':
/usr/local/src/Python-2.4.6/./Modules/posixmodule.c:6240: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp'
libpython2.4.a(posixmodule.o): In function `posix_tempnam':
/usr/local/src/Python-2.4.6/./Modules/posixmodule.c:6195: warning: the use of `tempnam' is dangerous, better use `mkstemp'
case $MAKEFLAGS in \
*-s*) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py -q build;; \
*) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py build;; \
esac
/bin/sh: line 1: 5296 Segmentation fault (core dumped) CC='gcc -pthread' LDSHARED='gcc -pthread -shared' OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ./python -E ./setup.py build
make: *** [Makefile:342: sharedmods] Error 139
Any idea what might be wrong here? If required, I can post the whole output after I run "make" (which is quite long).
Thanks a lot in advance.
I would recommend going a different route rather than installing Python 2.4 on your OS. That thing is ancient; it can only bring security issues.
So if your apps don't work with even Python 2.7, perhaps you will have better luck running Python 2.4 in a Docker container. See e.g. https://github.com/pantuza/docker-python-2.4.3 for a Docker image with Python 2.4 and https://www.docker.com/get-started for an introduction to Docker.
And probably the next step should be porting those apps to Python 3, so that you can get rid of that ancient Python version for good.
I have downloaded Mayan EDMS (Electronic Document Management System) from GitHub and configured the project using the Django server. I added the required libraries as needed. Now the project runs with the error:
ocr.exceptions.OCRError: No OCR tool found
When I searched for this error, I found that pyocr looks for the OCR tools (Tesseract, Cuneiform, etc.) installed on your system and just tells you what it has found.
Then I tried to install Tesseract using the command pip install tesseract-ocr.
I got this error:
Requirement already satisfied: cython in ./venv2/lib/python2.7/site-packages (from tesseract-ocr) (0.28.4)
running bdist_wheel
running build
running build_py
file tesseract_ocr.py (for module tesseract_ocr) not found
file tesseract_ocr.py (for module tesseract_ocr) not found
running build_ext
building 'tesseract_ocr' extension
creating build
creating build/temp.linux-x86_64-2.7
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=/build/python2.7-l1RrwO/python2.7-2.7.14=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c tesseract_ocr.cpp -o build/temp.linux-x86_64-2.7/tesseract_ocr.o
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
tesseract_ocr.cpp:600:10: fatal error: leptonica/allheaders.h: No such file or directory
#include "leptonica/allheaders.h"
Please help me solve this issue. Thanks in advance.
Tesseract is installed on the OS using the apt-get command. The command you are using (pip) is for installing Python packages; that is the reason for the error.
For reference: http://docs.mayan-edms.com/en/stable/topics/deploying.html#deploying
If using a Debian or Ubuntu based Linux distribution, get the executable requirements using:
sudo apt-get install g++ gcc ghostscript gnupg1 graphviz libjpeg-dev libmagic1 \
libpq-dev libpng-dev libreoffice libtiff-dev poppler-utils postgresql \
python-dev python-pip python-virtualenv redis-server sane-utils supervisor \
tesseract-ocr zlib1g-dev -y
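Once tesseract-ocr is installed system-wide, a quick way to verify that pyocr now finds it (a minimal sketch using pyocr's API, mentioned in the question):

import pyocr

# Lists the OCR backends pyocr detected on the system; after installing
# tesseract-ocr this should no longer be empty.
tools = pyocr.get_available_tools()
print([tool.get_name() for tool in tools])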
I hit a problem when converting Python code to a shared object with Cython.
My setup file:
from distutils.core import setup
from Cython.Build import cythonize
setup(
    ext_modules = cythonize("hello.py")
)
Everything works fine on my Ubuntu desktop until it is transferred to CentOS, where I got this error:
undefined symbol: PyUnicodeUCS4_DecodeUTF8
I googled and found that there are many questions on this, but almost all of them only say the root cause is Python built with UCS2 vs. UCS4, which I understand; I didn't find one that shows how to solve it.
IMO, the ways to solve it are:
Rebuild Python to get the right version via "--enable-unicode=ucs4/ucs2" (but then I need to reinstall all packages).
Compile the code on another desktop whose Python has the right UCS mode.
Now, I want to know if there is a way to set Cython to compile with a specified UCS mode.
Any suggestions are greatly appreciated.
Thanks.
First, to answer your actual question:
I want to know if there is a way to set Cython to compile with a specified UCS mode.
You can build a separate Python installation from source and build your extension against its headers. To find the headers, you can use the python-config tool (or python3-config for Python 3). It is usually located in the same bin directory as the python executable:
$ # system python on my machine (macos):
$ which python-config
/usr/bin/python-config
$ # python 3 installation
$ which python3-config
/Library/Frameworks/Python.framework/Versions/3.6/bin/python3-config
$ python-config --cflags
-I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE
$ python-config --ldflags
-L/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config -lpython2.7 -ldl -framework CoreFoundation
Copy the output into your setup.py:
from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize
cflags_ucs4 = [
    '-I/Library/Frameworks/Python.framework/Versions/3.6/include/python3.6m',
    '-I/Library/Frameworks/Python.framework/Versions/3.6/include/python3.6m',
    ...
]
ldflags_ucs4 = [
    '-L/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/config-3.6m-darwin',
    '-lpython3.6m',
    ...
]
cflags_ucs2 = [...]
ldflags_ucs2 = [...]

should_build_ucs2 = False  # i.e. could be passed via sys.argv

if should_build_ucs2:
    cflags = cflags_ucs2
    ldflags = ldflags_ucs2
else:
    cflags = cflags_ucs4
    ldflags = ldflags_ucs4

extensions = [
    Extension('hello', ['hello.py'], extra_compile_args=cflags, extra_link_args=ldflags),
]

setup(
    ext_modules = cythonize(extensions)
)
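For the "could be passed via sys.argv" comment above, a minimal sketch (the --ucs2 flag name is my own invention): consume a custom flag before setup() parses the remaining arguments. On Python 2 you can also check which mode a given interpreter was built with via sysconfig.get_config_var('Py_UNICODE_SIZE'), which is 2 for UCS2 and 4 for UCS4.

import sys

# Hypothetical custom flag: remove it from sys.argv so distutils does not
# choke on an unknown option, and use it to pick the flag set above.
should_build_ucs2 = '--ucs2' in sys.argv
if should_build_ucs2:
    sys.argv.remove('--ucs2')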
However, I do not recommend doing that, as you won't win anything by it: you will still need to build and distribute two separate packages (one for UCS2, another for UCS4), which is messy to maintain.
Instead, if you are building a wheel that should be installable on a wide range of Linux distros (which is most probably your actual goal), I would suggest making your build compliant with PEP 513 (manylinux1 packages). I suggest you read it through, as it was very helpful for me when I faced the problem of distributing Linux-compatible wheels.
Now, one way to get a manylinux1-compliant wheel is to build the wheel on your machine, then run auditwheel to check for platform-specific issues and try to resolve them:
$ pip install auditwheel
$ python setup.py bdist_wheel
$ # there should be now a mypkg-myver-cp36-cp36m-linux_x86_64.whl file in your dist directory
$ auditwheel show dist/mypkg-myver-cp36-cp36m-linux_x86_64.whl
$ # check what warnings auditwheel produced
$ # if there are warnings, try to repair them:
$ auditwheel repair dist/mypkg-myver-cp36-cp36m-linux_x86_64.whl
This should generate a wheel file named mypkg-myver-cp36-cp36m-manylinux1_x86_64.whl in a wheelhouse directory. Check again that everything is fine now by running auditwheel show wheelhouse/mypkg-myver-cp36-cp36m-manylinux1_x86_64.whl. If the wheel is now consistent with manylinux1, you can distribute it and it should work on most Linux distros (at least those with glibc; distros with musl libc such as Alpine won't work, and you will need to build a separate wheel if you want to support them).
What should you do if auditwheel can't repair your wheel? The best way is to pull the special Docker container provided by PyPA for building manylinux1-compliant wheels (this is what I'm using myself):
$ docker pull quay.io/pypa/manylinux1_x86_64
A wheel built inside this container will work on most of the Linux distros (excluding some exotic ones like Alpine).
I built libyaml and installed it into a local area:
yaml-0.1.5 $ ./configure --prefix=/usr/local/sqlminus
yaml-0.1.5 $ make install
yaml-0.1.5 $ ls -l /usr/local/sqlminus/include/yaml.h
-rw-r--r--# 1 mh admin 54225 Jan 5 09:05 /usr/local/sqlminus/include/yaml.h
But when I build PyYAML, it cannot find yaml.h.
PyYAML-3.11 $ /usr/local/sqlminus/bin/python setup.py build
checking if libyaml is compilable
gcc -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall
-Wstrict-prototypes -I/usr/local/sqlminus/include/python2.7
-c build/temp.macosx-10.4-x86_64-2.7/check_libyaml.c
-o build/temp.macosx-10.4-x86_64-2.7/check_libyaml.o
build/temp.macosx-10.4-x86_64-2.7/check_libyaml.c:2:10:
fatal error: 'yaml.h'
file not found
#include <yaml.h>
^
1 error generated.
How can I tell PyYAML where I've installed libyaml?
(update) Based on dotslash's comment below, editing setup.cfg and adding these two lines made everything work smoothly.
include_dirs=/usr/local/sqlminus/include
library_dirs=/usr/local/sqlminus/lib
(end update)
I think you should install the dependencies.
If you are using an Ubuntu or Debian based system, you can search for them like this:
apt-cache search libyaml
Then you will find there are some related packages.
I would suggest you try installing this one: apt-get install libyaml-dev -y
If you are using macOS, you could change the source in check_libyaml.c and tell it the absolute path of yaml.h.
Or just specify the path while compiling:
python setup.py config --with-includepath=/path/to/your/install/of/python/includes/
Then build as usual.
More info can be found here.
Hope this is helpful.
Based on dotslash's comment, editing setup.cfg and adding these two lines made everything work smoothly:
include_dirs=/usr/local/sqlminus/include
library_dirs=/usr/local/sqlminus/lib
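As a quick verification that the build actually found libyaml, a sketch (the CLoader attribute is only present when PyYAML was compiled against libyaml):

import yaml

# True means the C-accelerated libyaml bindings were compiled in;
# False means PyYAML silently fell back to the pure-Python loader.
print(hasattr(yaml, 'CLoader'))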
This project works fine on my local Ubuntu 12.04 and Mac OS X 10.10 (with Fink Python) machines. I can't seem to figure out how to configure the .travis.yml to get the .cpp files to build with g++-4.8 (4.9 or 5.x would be fine too).
Project: https://github.com/schwehr/libais
My most recent failed attempt:
language: python
python:
- "2.7"
- "3.4"
before_install:
- sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
- sudo apt-get update -qq
- if [ "$CXX" = "g++" ]; then export CXX="g++-4.8" CC="gcc-4.8"; fi
install:
- sudo apt-get install -qq gcc-4.8 g++-4.8
- python setup.py install
script:
- python setup.py test
Gives:
gcc -pthread -fno-strict-aliasing -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/python/2.7.9/include/python2.7 -c src/libais/ais_py.cpp -o build/temp.linux-x86_64-2.7/src/libais/ais_py.o -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for Ada/C/ObjC but not for C++ [enabled by default]
cc1plus: error: unrecognized command line option ‘-std=c++11’
The key portion of my setup.py:
EXTRA_COMPILE_ARGS = []
if sys.platform in ('darwin', 'linux', 'linux2'):
    EXTRA_COMPILE_ARGS = ['-std=c++11']

AIS_MODULE = Extension(
    '_ais',
    extra_compile_args=EXTRA_COMPILE_ARGS,
Thanks, Dominic. I tried printing things and that was helpful. It got me thinking that I could just get explicit and force Python to use the correct compiler. That makes it easier to see what is happening.
install:
- sudo apt-get install -qq gcc-4.8 g++-4.8
- CC=g++-4.8 python setup.py install
Which works.
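An alternative (a sketch mirroring the os.environ approach from the cross-compiling answer earlier on this page; not what the Travis config above uses) is to pin the compiler inside setup.py itself:

import os

# Mirror the working command line (CC=g++-4.8 python setup.py install) by
# setting the compiler before distutils reads the environment; assumes
# g++-4.8 was installed in the install step.
os.environ.setdefault('CC', 'g++-4.8')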