I'm trying to compile Python (version 3.1.3) for ARM, following this guide.
These are the commands I am issuing (on Ubuntu 12):
CC=arm-linux-gnueabi-gcc CXX=arm-linux-gnueabi-g++ AR=arm-linux-gnueabi-ar RANLIB=arm-linux-gnueabi-ranlib ./configure --host --build=x86_64-linux-gnu --prefix=/python
make HOSTPYTHON=./hostpython HOSTPGEN=./Parser/hostpgen BLDSHARED="arm-linux-gnueabi-gcc -shared" CROSS_COMPILE=arm-linux-gnueabi- CROSS_COMPILE_TARGET=yes HOSTARCH=x86_64-linux-gnu BUILDARCH=x86_64-linux-gnu
make install HOSTPYTHON=./hostpython BLDSHARED="arm-linux-gnueabi-gcc -shared" CROSS_COMPILE=arm-linux-gnueabi- CROSS_COMPILE_TARGET=yes prefix=~/Python-2.7.2/_install
A few things to note:
When executing the first command, if --host is set to arm-linux, configure refuses to run and tells me that I should use '--host' for cross-compiling. That is why I did not set it to anything.
When running the second line, I get
configure: WARNING: Cache variable ac_cv_host contains a newline.
Failed to configure _ctypes module
Python build finished, but the necessary bits to build these modules
were not found:
_curses            _curses_panel      _dbm
_gdbm              _hashlib           _sqlite3
_ssl               bz2                ossaudiodev
readline           zlib
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
Failed to build these modules:
_tkinter
I get a similar error when running the third line, but I guess that is because the command above did not work.
Can anyone help me fix this?
It's much easier to compile natively under QEMU than cross-compile.
Unpack an ARM chroot from whichever project you like, e.g. Arch Linux ARM, Raspbian, etc.
That already gives you a binary Python for ARM, but if you really want to compile your own:
Download qemu-user-static (e.g. the Debian package) and unpack it.
Install that single static binary into the root of your ARM chroot.
Add the magic hex to binfmt in proc. There are instructions for Debian, Gentoo and generic setups, plus a list of magic hex sequences. Below are my settings:
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register
export QEMU_CPU=arm926
Optionally, mount --bind /tmp, /proc and /sys into the chroot as required.
Enjoy your virtual ARM!
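Once inside the chroot, a quick sanity check (just a sketch, not part of any particular guide) is to ask Python what machine it thinks it is running on:
# check_arch.py -- run inside the ARM chroot to confirm the emulation is active
import platform
import sys

print(platform.machine())   # expect an ARM machine string such as 'armv5tejl' or 'armv7l' under qemu-arm
print(sys.version)          # the interpreter you are about to rebuild or replace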
I got the same error, ignored it, and carried on with the procedure suggested by
http://randomsplat.com/id5-cross-compiling-python-for-embedded-linux.html
It worked with a hello_world program. You can also run a testall.py file from the _install/lib/Python2.7/ folder.
You can also refer to
http://whatschrisdoing.com/blog/talks/PyConTalk2012.pdf
I can't get otherwise-available modules to be seen by a compiled Python script. How do I need to change the process below so that it accepts either venv-based or global modules?
Steps:
$ python3 -m venv sometest
$ cd sometest
$ . bin/activate
(sometest) $ pip3 install PyCrypto Cython
The basic script, which uses the non-standard module Crypto:
# hello.py
from Crypto.Cipher import AES
import base64
obj = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
msg = "The answer is no"
ciphertext = obj.encrypt(msg)
print(msg)
print(base64.b64encode(ciphertext))
(sometest) $ python3 hello.py
The answer is no
b'1oONZCFWVJKqYEEF4JuL8Q=='
Compiling it:
(sometest) $ cython -3 --embed hello.py
(sometest) $ gcc -Os -I /usr/include/python3.5m -o hello hello.c -lpython3.5m -lpthread -lm -lutil -ldl
(sometest) $ ./hello
Traceback (most recent call last):
File "hello.py", line 1, in init hello
from Crypto.Cipher import AES
ImportError: No module named 'Crypto'
I don't think the problem is with using the venv from a Cython-embedded compiled script: the import works elsewhere on the system without the venv (that is, python3 -c 'from Crypto.Cipher import AES' does not fail).
The process works fine otherwise:
(sometest) $ echo 'print("hello world")' > hello2.py
(sometest) $ cython -3 --embed hello2.py
(sometest) $ gcc -Os -I /usr/include/python3.5m -o hello2 hello2.c -lpython3.5m -lpthread -lm -lutil -ldl
(sometest) $ ./hello2
hello world
System:
(sometest) $ python3 --version
Python 3.5.2
(sometest) $ pip3 freeze
Cython==0.29.11
pkg-resources==0.0.0
pycrypto==2.6.1
(sometest) $ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
Usually, a Python interpreter isn't "standalone": in order to work it needs its standard library (for example ctypes (compiled) or site.py (interpreted)), and the path to other site-packages (for example numpy) must also be set.
Although it is possible to make a Python interpreter fully standalone by freezing the py-modules and merging all C extensions into the resulting executable (see for example this SO post), it is easier to provide the needed installation to the embedded interpreter. One can download the files needed for a "standard" installation from the Python homepage (at least for Windows); see also this SO question.
Sometimes finding the standard modules/site-packages doesn't work out of the box: one has to help the interpreter by setting the Python path, i.e. by adding <..>/sometest/lib/python3.5/site-packages (sometest being the virtual environment's root folder) to sys.path, either programmatically in the pyx file or by setting the PYTHONPATH environment variable before starting.
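For example, a minimal sketch of the programmatic variant (the site-packages path below is an assumption; adjust it to where your venv actually lives and to your Python version), added at the top of hello.py before the Crypto import:
# top of hello.py: extend the module search path before importing Crypto
import os
import sys

# assumed layout: the venv root is a folder called "sometest" in the home directory
venv_site = os.path.expanduser("~/sometest/lib/python3.5/site-packages")
if venv_site not in sys.path:
    sys.path.insert(0, venv_site)

from Crypto.Cipher import AES  # now resolvable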
Read on for more gory details and alternative solutions.
This answer is for Linux and Python 3 (Python 3.7); the basic idea is the same for Windows/macOS, but some details might differ.
Because venv is used, we have the following alternatives for solving the issue:
adding <..>/sometest/lib/python3.5/site-packages (sometest being the virtual environment's root folder) to sys.path, either programmatically in the pyx file or by setting the PYTHONPATH environment variable before starting.
placing the executable with the embedded Python in a subdirectory of sometest (e.g. bin, or one you create yourself).
using virtualenv instead of venv.
Note: for the executable with the embedded Python, it makes no difference whether a virtual environment is activated, or which one.
Why does the above solve the issue in your scenario?
The problem is that the (embedded) Python interpreter needs to figure out where the following things are:
platform-independent directories/files, e.g. os.py, argparse.py (mostly everything *.py/*.pyc). Given sys.prefix, the interpreter can figure out where to find them (i.e. in prefix/lib/pythonX.Y).
platform-dependent directories/files, e.g. shared libraries. Given sys.exec_prefix, the interpreter can figure out where to find them (e.g. shared libraries can be found in exec_prefix/lib/pythonX.Y/lib-dynload).
The algorithm can be found here, and the search is performed when Py_Initialize is executed. Once these directories are found, sys.path can be constructed.
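A quick way to see what the embedded interpreter actually resolved (a diagnostic sketch only, not part of the fix itself) is to print these values at the top of the script, before the failing import:
# diagnostic: show what Py_Initialize resolved for the embedded interpreter
import sys

print("prefix:     ", sys.prefix)
print("exec_prefix:", sys.exec_prefix)
print("executable: ", sys.executable)
print("path:")
for entry in sys.path:
    print("   ", entry)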
However, when using venv, there is a pyvenv.cfg file next to the exe or in its parent directory, which ensures that the right Python home is found; a good starting point is the home key in this file.
If Py_NoSiteFlag is not set, Py_Initialize will use site.py (which the interpreter can find because sys.prefix is known), or more precisely site.main(), to add the virtual environment's site-packages to sys.path. While doing so, site.py looks for pyvenv.cfg and parses it. However, the local site-packages are added to the Python path only when:
If a file named "pyvenv.cfg" exists one directory above sys.executable, sys.prefix and sys.exec_prefix are set to that directory and it is also checked for site-packages (sys.base_prefix and sys.base_exec_prefix will always be the "real" prefixes of the Python installation).
In your case pyvenv.cfg is not in the directory above the exe but in the same one, so the local site-packages, where the libraries were installed via pip, aren't included. Global site-packages aren't included either, because pyvenv.cfg contains the key include-system-site-packages = false. Thus no site-packages are allowed at all, and the installed libraries cannot be found.
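For reference, a pyvenv.cfg written by python3 -m venv typically looks something like this (the home path and version shown here are placeholders for whatever your base installation is):
home = /usr/bin
include-system-site-packages = false
version = 3.5.2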
However, moving the exe one directory down would lead to the local site-packages being included in the path.
Other scenarios are possible; what counts is the location of the executable, not which environment is activated.
A: Executable is somewhere, but not inside a virtual environment
This search heuristic works more or less reliably for installed Python interpreters, but can fail for embedded interpreters or virtual environments (see this issue for much more information).
If Python was installed using the usual apt install or similar, then it will be found (due to step 4 of the search algorithm) and the system installation will be used by the embedded interpreter.
However, if files were moved around or Python was built from source but not installed, then the embedded interpreter cannot start up:
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'
In this case, calling Py_SetPythonHome or setting the environment variable $PYTHONHOME are possible solutions.
B: Executable inside a virtual environment, created with virtualenv
Assuming the virtual environment and the embedded Python use the same Python version (otherwise we have the case above), the embedded exe will use the local site-packages. The home search algorithm will always find the local home, due to this rule:
Step 3. Try to find prefix and exec_prefix relative to argv0_path, backtracking up the path until it is exhausted. This is the most common step to succeed. Note that if prefix and exec_prefix are different, exec_prefix is more likely to be found; however if exec_prefix is a subdirectory of prefix, both will be found.
In this case argv0_path is the path to the exe (there is no pyvenv.cfg file!), and the "landmarks" (lib/python$VERSION/os.py and lib/python$VERSION/lib-dynload) will be found, because they are present as symlinks in the local home above the exe.
C: Executable two folders deep inside a venv-environment
Going two folders deep rather than one (where it works) in a venv environment results in case A: the pyvenv.cfg file isn't read while searching for home (it is too far above), venv environments lack symlinks to the "landmarks" (locally only site-packages are present), so step 3 will fail, with step 4 being the only hope.
Corollary: embedded Python will not work without a proper Python installation, unless, among other possibilities:
the needed files are packed into lib/pythonX.Y/* next to the embedding executable or somewhere above it (and there is no pyvenv.cfg around to mess the search up),
or pyvenv.cfg is used to point the interpreter to the right location.
I need to try Python 3.7 with openssl-1.1.1 on Ubuntu 16.04. Both the Python and OpenSSL versions are pre-release. Following instructions in a previous post on how to statically link OpenSSL into Python, I downloaded the source for openssl-1.1.1.
Then I navigated into the OpenSSL source directory and executed:
./config
sudo make
sudo make install
Then I edited Modules/Setup.dist to uncomment the following lines:
SSL=/usr/local/ssl
_ssl _ssl.c \
-DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
-L$(SSL)/lib -lssl -lcrypto
Then I downloaded the Python 3.7 source code, navigated into it, and executed:
./configure
make
make install
After executing make install I got this error at the end of the terminal output:
./python: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
generate-posix-vars failed
Makefile:596: recipe for target 'pybuilddir.txt' failed
make: *** [pybuilddir.txt] Error 1
I could not figure out what the problem is or what I need to do.
This has (should have) nothing to do with Python or OpenSSL versions.
The Python build process includes steps in which the newly built interpreter is launched and attempts to load some of the newly built modules, including extension modules (which are written in C and are actually shared objects, .so files).
When an .so is loaded, the loader must find (recursively) all the .so files that it depends on, otherwise it won't be able to load it.
Python has some modules (e.g. _ssl*.so, _hashlib*.so) that depend on the OpenSSL libs. Since you built yours against OpenSSL 1.1.1 (whose lib names differ from what comes with the system by default, typically 1.0.*), the loader won't be able to use the default ones.
What you need to do is instruct the loader (check [Man7]: LD.SO(8) for more details) where to look for "your" OpenSSL libs (which are located under /usr/local/ssl/lib). One way of doing that is adding their path to the ${LD_LIBRARY_PATH} env var (before building Python):
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/ssl/lib
./configure
make
make install
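Once the build succeeds, a quick sanity check (just a sketch; the exact version string will vary) is to ask the freshly built interpreter which OpenSSL it actually linked against:
# run with the newly built interpreter, e.g. ./python check_ssl.py
import hashlib
import ssl

print(ssl.OPENSSL_VERSION)                 # should mention OpenSSL 1.1.1, not the system 1.0.x
print(hashlib.sha256(b"ok").hexdigest())   # proves _hashlib loaded as well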
You might also want to take a look at [Python.Docs]: Configure Python - Libraries options (--with-openssl, --with-openssl-rpath).
Check [SO]: How to enable FIPS mode for libcrypto and libssl packaged with Python? (#CristiFati's answer) for details on a wider problem (remotely) related to yours.
What I did to fix this:
./configure --with-ssl=./libssl --prefix=/subsystem
sed -i 's!^RUNSHARED=!RUNSHARED=LD_LIBRARY_PATH=/path/to/own/libssl/lib!' Makefile
make
make install
Setting LD_LIBRARY_PATH with export was not sufficient.
With Python 3.6.5 and openssl-1.1.0h I got stuck on the same problem; I also had to uncomment _socket socketmodule.c.
I am trying to install pdfMiner to work with CollectiveAccess. My host (pair.com) has given me the following information to help in this quest:
When compiling, it will likely be necessary to instruct the installation to use your account space above, and not try to install into the operating system directories. Typically, using "--home=/usr/home/username/pdfminer" at the end of the install command should allow for that.
I followed this instruction when trying to install.
The result was:
running install
running build
running build_py
running build_scripts
running install_lib
running install_scripts
changing mode of /usr/home/username/pdfminer/bin/latin2ascii.py to 755
changing mode of /usr/home/username/pdfminer/bin/pdf2txt.py to 755
changing mode of /usr/home/username/pdfminer/bin/dumppdf.py to 755
running install_egg_info
Removing /usr/home/username/pdfminer/lib/python/pdfminer-20140328.egg-info
Writing /usr/home/username/pdfminer/lib/python/pdfminer-20140328.egg-info
I don't see anything wrong with that (I'm very new to Python), but when I try to run the sample command $ pdf2txt.py samples/simple1.pdf I get this error:
Traceback (most recent call last):
  File "pdf2txt.py", line 3, in <module>
    from pdfminer.pdfdocument import PDFDocument
ImportError: No module named pdfminer.pdfdocument
I'm running Python 2.7.3. I can't install as root (shared hosting). I'm using the most recent version of pdfminer, which is from 2014/03/28.
I've seen some posts on similar issues ("no module named ...") but nothing exactly the same. The proposed solutions either don't help or don't apply: installing with sudo is not an option, specifying the path for Python doesn't seem to be the issue, etc.
Or is this a question for my host? (i.e., something amiss or different about their setup)
I had an error like this:
No module named 'pdfminer.pdfinterp'; 'pdfminer' is not a package
My problem was that I had named my script pdfminer.py; because the script's own directory comes first on sys.path, Python took my file for the pdfminer package and tried to import from it.
I renamed my script to something else, deleted all the *.pyc files and the __pycache__ directory, and my problem was solved.
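If you suspect this kind of shadowing, a tiny diagnostic (generic, not specific to pdfminer) makes it obvious which file Python is actually importing:
# shows which file Python really imports as "pdfminer"
import pdfminer
print(pdfminer.__file__)   # a path ending in your own pdfminer.py means the real package is shadowed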
Using this command worked for me and removed the error:
pip install pdfminer.six
Since the pdfminer package is installed to a non-standard/non-default location, Python won't be able to find it. In order to use it, you will need to add it to your 'pythonpath'. Three ways:
At run time, put this in your script pdf2txt.py:
import sys
# if there are no conflicting packages in the default Python Libs =>
sys.path.append("/usr/home/username/pdfminer")
or
import sys
# to always use your package lib before the system's =>
sys.path.insert(1, "/usr/home/username/pdfminer")
Note: the install path specified with --home is used as the Lib for all packages you might want to install, not just this one. You should delete that folder and re-install with --home=/usr/home/username/myPyLibs (or any generic name), so that when you install other packages into that install path you only need to add the one path to your local Lib to be able to import them:
import sys
sys.path.insert(1, "/usr/home/username/myPyLibs")
Add it to PYTHONPATH before executing your script:
export PYTHONPATH="${PYTHONPATH}:/usr/home/username/myPyLibs"
And then put that in your ~/.bashrc file (/usr/home/username/.bashrc) or .profile as applicable. This may not work for programs which are not executed from the console.
Create a virtualenv and install the packages you need into that.
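Whichever of the three ways you pick, a quick diagnostic sketch can confirm that the directory which actually contains the pdfminer package (with a distutils --home install that appears to be <home>/lib/python, as the install-log paths above suggest) ended up on the search path:
# diagnostic: confirm the pdfminer install directory is on the module search path
import pprint
import sys

pprint.pprint(sys.path)   # the directory containing the pdfminer package should be listed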
I have a virtual environment, and I had to activate it before doing pip3 install so that the venv would see the package.
source ~/venv/bin/activate
I want to compile my Python code to a binary using PyInstaller, but hidden imports are blocking me. For example, the following code imports psutil and prints the CPU count:
# example.py
import psutil
print psutil.cpu_count()
And I compile the code:
$ pyinstaller -F example.py --hidden-import=psutil
When I run the output under dist:
ImportError: cannot import name _psutil_linux
Then I tried:
$ pyinstaller -F example.py --hidden-import=_psutil_linux
Still the same error. I have read the PyInstaller manual, but I still don't know how to use hidden imports. Is there a detailed example for this? Or at least an example of how to compile and run my example.py?
ENVs:
OS: Ubuntu 14.04
Python: 2.7.6
pyinstaller: 2.1
Hi, hope you're still looking for an answer. Here is how I solved it:
Add a file called hook-psutil.py:
from PyInstaller.hooks.hookutils import (collect_data_files, collect_submodules)
datas = [('./venv/lib/python2.7/site-packages/psutil/_psutil_linux.so', 'psutil'),
('./venv/lib/python2.7/site-packages/psutil/_psutil_posix.so', 'psutil')]
hiddenimports = collect_submodules('psutil')
And then call pyinstaller --additional-hooks-dir=(the dir containing the above script) script.py.
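On newer PyInstaller releases the hook helpers live in PyInstaller.utils.hooks, and the shared libraries can be collected without hard-coding the venv path; assuming a version that provides collect_dynamic_libs, an equivalent hook-psutil.py might look like this (a sketch, not tested against PyInstaller 2.1):
# hook-psutil.py -- variant for newer PyInstaller releases
from PyInstaller.utils.hooks import collect_dynamic_libs, collect_submodules

binaries = collect_dynamic_libs('psutil')      # picks up _psutil_linux.so, _psutil_posix.so, ...
hiddenimports = collect_submodules('psutil')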
PyInstaller is hard to configure; cx_Freeze may be better. Both support Windows (you can download the exe directly) and Linux. Given the example.py above, on Windows, assuming you have installed Python in the default path (C:\\Python27):
$ python c:\\Python27\\Scripts\\cxfreeze example.py -s --target-dir some_path
cxfreeze is a Python script, so you should run it with Python; the build files then end up under some_path (with a lot of xxx.pyd and xxx.dll files).
On Linux, just run:
$ cxfreeze example.py -s --target-dir some_path
which likewise outputs a lot of files (xxx.so) under some_path.
The drawback of cx_Freeze is that it does not bundle all libraries into the target dir, which means you have to test your build under different environments. If any library is missing, just copy it to the target dir. One exceptional case: if, for example, you build your Python under CentOS 6 but run it under CentOS 7, an error about the missing libc.so.6 will be thrown; then you should compile your Python under both CentOS 6 and CentOS 7.
What worked for me is as follows:
Install python-psutil: sudo apt-get install python-psutil. If you have a previous installation of the psutil module by some other method, for example from source or via easy_install, remove it first.
Run pyinstaller as you do, without the hidden-import option.
I'm still facing the error.
Implementation:
1. A Python program with modules like platform, os, shutil and psutil.
When I run the script directly using Python, it works fine.
2. If I build a binary using PyInstaller, the binary builds successfully, but when I run the binary I get "No module named psutil". I have tried several methods, like adding the hidden import and other things; none of them work. I have been trying for almost 2 to 3 days.
Error:
ModuleNotFoundError: No module named 'psutil'
Command used for creating the binary:
pyinstaller --hidden-import=['_psutil_linux'] --onefile --clean serverHW.py
I tried --additional-hooks-dir= as well; it is not working either. When I run the binary I get the module-not-found error.
I've tried to resolve this problem for about 3 days, and I finally felt that I needed to ask for help by creating my own question.
I have Windows 7 x64 and Qt 4.8.6 installed.
I need Python with PyQt and QScintilla2 installed and working.
Now I will describe my last actions. I did everything as the included packages' instructions said.
1) Installed Python 2.7.9 32-bit from the official website.
2) Downloaded SIP from here (dev snapshot), then:
configure.py --platform win32-g++
mingw32-make
mingw32-make install
3) Downloaded PyQt from here (not the installer but the dev snapshot, because I need to build with MinGW and the installer produces an MSVC version), then:
configure-ng.py -spec win32-g++
mingw32-make
mingw32-make install
After these steps I tested PyQt on my project; everything works fine.
Then I started trying to install QScintilla2.
4) Downloaded QScintilla2 from here (dev snapshot), then:
a) in Qt4Qt5 folder:
qmake qscintilla.pro -spec win32-g++
mingw32-make
mingw32-make install
This installed QScintilla2 into Qt 4.8.6, as far as I could see;
b) in the Python folder (the f..ing Python bindings, excuse my French):
config.py --spec win32-g++
mingw32-make
After this I got an ld.exe error (linking error).
Then, after doing some research, I manually edited my Makefile.Release (by adding -lpython27 to the LIBS parameter):
LIBS = -L"c:\Qt-mingw\4.8.6\lib" -LC:\Python27\libs -LC:\Qt-mingw\4.8.6\lib -lqscintilla2 -lQtGui4 -lQtCore4 -lpython27
After this, my mingw32-make completed successfully. So:
mingw32-make install
This installed the QScintilla2 Python bindings.
Now I can see Qsci autocomplete in Eclipse.
So I've tried this:
from PyQt4.Qsci import QsciScintilla
And I've got this traceback:
from PyQt4.Qsci import QsciScintilla
ImportError: DLL load failed: Не найден указанный модуль
(Translation: The specified module could not be found)
I've tried this with both the dev snapshot and the src packages from the Riverbank website, and also with MinGW 4.8.1 and MinGW-w64 4.8.4. I can't use a MinGW-w64 version above 4.8 because I need boost-1.55 and it only supports MinGW 4.8.
I don't know what to do now, but I really want to use Scintilla in my project, so I'll be very grateful for any suggestions.
Have you tried loading QsciScintilla right from the console? I mean: enter the directory where QScintilla is located (so that the current folder is the default folder), then try running "from PyQt4.Qsci import QsciScintilla". If the module still fails to load, it probably means QScintilla depends on extra dynamic libraries; use a DLL dependency tool to find out which libraries are missing, then put them in the same folder as QsciScintilla.
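Before reaching for a DLL dependency tool, a small diagnostic sketch (using only module names already mentioned above) can at least confirm where the Qsci extension ended up and that the file really exists:
# locate the installed Qsci extension and list what sits next to it
import os
import PyQt4

pkg_dir = os.path.dirname(PyQt4.__file__)
print("PyQt4 package dir:", pkg_dir)
for name in sorted(os.listdir(pkg_dir)):
    if name.lower().startswith("qsci"):
        print("found:", name)   # expect Qsci.pyd on Windows; nothing here means the install step failed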