I can't get otherwise-available modules to be seen by a compiled Python script. How do I need to change the process below so that it accepts either venv-based or global modules?
Steps:
$ python3 -m venv sometest
$ cd sometest
$ . bin/activate
(sometest) $ pip3 install PyCrypto Cython
The basic script, using a non-standard module Crypto:
# hello.py
from Crypto.Cipher import AES
import base64
obj = AES.new('This is a key123', AES.MODE_CBC, 'This is an IV456')
msg = "The answer is no"
ciphertext = obj.encrypt(msg)
print(msg)
print(base64.b64encode(ciphertext))
(sometest) $ python3 hello.py
The answer is no
b'1oONZCFWVJKqYEEF4JuL8Q=='
Compiling it:
(sometest) $ cython -3 --embed hello.py
(sometest) $ gcc -Os -I /usr/include/python3.5m -o hello hello.c -lpython3.5m -lpthread -lm -lutil -ldl
(sometest) $ ./hello
Traceback (most recent call last):
File "hello.py", line 1, in init hello
from Crypto.Cipher import AES
ImportError: No module named 'Crypto'
I don't think the problem is specific to using the venv from a cython-embedded compiled script: the import also works elsewhere on the system without the venv (that is, python3 -c 'from Crypto.Cipher import AES' does not fail).
The process works fine otherwise:
(sometest) $ echo 'print("hello world")' > hello2.py
(sometest) $ cython -3 --embed hello2.py
(sometest) $ gcc -Os -I /usr/include/python3.5m -o hello2 hello2.c -lpython3.5m -lpthread -lm -lutil -ldl
(sometest) $ ./hello2
hello world
System:
(sometest) $ python3 --version
Python 3.5.2
(sometest) $ pip3 freeze
Cython==0.29.11
pkg-resources==0.0.0
pycrypto==2.6.1
(sometest) $ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
Usually, a Python interpreter isn't "standalone": in order to work it needs its standard library (for example ctypes (compiled) or site.py (interpreted)), and the path to other site-packages (for example numpy) must also be set.
While it is possible to make a Python interpreter fully standalone by freezing the py-modules and merging all C extensions into the resulting executable (see for example this SO post), it is easier to provide the needed installation to the embedded interpreter. One can download the files needed for a "standard" installation from the Python homepage (at least for Windows); see also this SO question.
Sometimes finding the standard modules/site-packages doesn't work out of the box: one has to help the interpreter by setting the Python path, i.e. by adding <..>/sometest/lib/python3.5/site-packages (sometest being the virtual environment's root folder) to sys.path, either programmatically in the pyx-file or by setting the PYTHONPATH environment variable prior to start.
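A minimal sketch of the programmatic variant (the path is illustrative and has to match your environment and Python version) is to extend sys.path at the very top of hello.py, before the Crypto import:
# hello.py
import sys
# venv's site-packages; adjust the path to your environment / Python version
sys.path.insert(0, "/path/to/sometest/lib/python3.5/site-packages")
from Crypto.Cipher import AES
# ... rest of the script unchanged
Setting PYTHONPATH to the same directory before starting the executable has the same effect.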
Read on for more gory details and alternative solutions.
This answer is for Linux and Python 3 (Python 3.7); the basic idea is the same for Windows/macOS, but some details might differ.
Because venv is used, we have the following alternatives to solve the issue:
adding <..>/sometest/lib/python3.5/site-packages (sometest being the virtual environment's root folder) to sys.path, either programmatically in the pyx-file (as sketched above) or by setting the PYTHONPATH environment variable prior to start.
placing the executable with the embedded Python in a subdirectory of sometest (e.g. bin, or a newly created one).
using virtualenv instead of venv.
Note: For the executable with the embedded Python, it doesn't matter whether a virtual environment is activated, or which one.
Why does the above solve the issue in your scenario?
The problem is that the (embedded) Python interpreter needs to figure out where the following things are:
platform-independent directories/files, e.g. os.py, argparse.py (mostly everything *.py/*.pyc). Given sys.prefix, the interpreter can figure out where to find them (i.e. in prefix/lib/pythonX.Y).
platform-dependent directories/files, e.g. shared libraries. Given sys.exec_prefix, the interpreter can figure out where to find them (e.g. shared libraries can be found in exec_prefix/lib/pythonX.Y/lib-dynload).
The algorithm can be found here; the search is performed when Py_Initialize is executed. Once these directories are found, sys.path can be constructed.
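A quick way to inspect what the embedded interpreter actually ended up with is to compile a tiny probe script the same way as hello.py and print the relevant values:
# probe.py - print what the embedded interpreter figured out
import sys
print("prefix:     ", sys.prefix)
print("exec_prefix:", sys.exec_prefix)
print("sys.path:   ", sys.path)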
However, when using venv, there is a pyvenv.cfg file next to the exe or in its parent directory, which ensures that the right Python home is found; a good starting point is the home key in this file.
If Py_NoSiteFlag is not set, Py_Initialize will utilize site.py (it can be found by the interpreter, because sys.prefix is known), or more precisely site.main(), to add the site-packages of the virtual environment to sys.path. While doing so, site.py looks for pyvenv.cfg and parses it. However, the local site-packages are added to the Python path only when:
If a file named "pyvenv.cfg" exists one directory above
sys.executable, sys.prefix and sys.exec_prefix are set to that
directory and it is also checked for site-packages (sys.base_prefix
and sys.base_exec_prefix will always be the "real" prefixes of the
Python installation).
In your case pyvenv.cfg is not in the directory above the exe, but in the same directory as the exe; thus the local site-packages, where the libraries were installed via pip, aren't included. Global site-packages aren't included either, because pyvenv.cfg has the key include-system-site-packages = false. Thus no site-packages are allowed at all, and the installed libraries cannot be found.
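For orientation, a pyvenv.cfg written by python3 -m venv looks roughly like this (values illustrative for the question's setup):
home = /usr/bin
include-system-site-packages = false
version = 3.5.2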
However, moving the exe one directory down (e.g. into bin) would lead to the local site-packages being included in the path.
Other scenarios are possible; what counts is the location of the executable, not which environment is activated.
A: Executable is somewhere, but not inside a virtual environment
This search heuristic works more or less reliably for installed Python interpreters, but can fail for embedded interpreters or virtual environments (see this issue for much more information).
If Python was installed via the usual apt install or similar, it will be found (due to step 4 in the search algorithm) and the system installation will be used by the embedded interpreter.
However, if files were moved around or Python was built from source but not installed, then the embedded interpreter cannot start up:
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'
In this case, calling Py_SetPythonHome or setting the environment variable $PYTHONHOME are possible solutions.
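For example, starting the executable with an explicitly set home (the path here is a placeholder for your actual prefix, optionally given as prefix:exec_prefix):
PYTHONHOME=/path/to/python/installation ./hello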
B: Executable inside a virtual environment, created with virtualenv
Assuming the virtual environment and the embedded Python use the same Python version (otherwise we have the case above), the embedded exe will use the local site-packages. The home-search algorithm will always find the local home, due to this rule:
Step 3. Try to find prefix and exec_prefix relative to argv0_path, backtracking up the path until it is exhausted. This is the most common step to succeed. Note that if prefix and exec_prefix are different, exec_prefix is more likely to be found; however if exec_prefix is a subdirectory of prefix, both will be found.
In this case argv0_path is the path to the exe (there is no pyvenv.cfg file!), and the "landmarks" (lib/python$VERSION/os.py and lib/python$VERSION/lib-dynload) will be found, because they are present as symlinks in the local home above the exe.
C: Executable two folders deep inside a venv-environment
Going two folders (and not one, where it works) down in a venv environment results in case A: the pyvenv.cfg file isn't read while searching for home (it is too far above), venv environments lack symlinks to the "landmarks" (locally only site-packages are present), and so step 3 will fail, with step 4 being the only hope.
Corollary: Embedded Python will not work without a proper Python installation, unless, among other possibilities:
the needed files are packed into lib/pythonX.Y/* next to the embedding executable or somewhere above (and there is no pyvenv.cfg around to mess the search up),
or pyvenv.cfg is used to point the interpreter to the right location.
Related
I need to try Python 3.7 with openssl-1.1.1 on Ubuntu 16.04. Both the Python and OpenSSL versions are pre-release. Following instructions from a previous post on how to statically link OpenSSL to Python, I downloaded the source for openssl-1.1.1.
Then navigate to the source code for openssl and execute:
./config
sudo make
sudo make install
Then, edit Modules/Setup.dist to uncomment the following lines:
SSL=/usr/local/ssl
_ssl _ssl.c \
-DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
-L$(SSL)/lib -lssl -lcrypto
Then download the Python 3.7 source code, navigate inside it, and execute:
./configure
make
make install
After I executed make install, I got this error at the end of the terminal output:
./python: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
generate-posix-vars failed
Makefile:596: recipe for target 'pybuilddir.txt' failed
make: *** [pybuilddir.txt] Error 1
I could not figure out what the problem is or what I need to do.
This has (should have) nothing to do with Python or OpenSSL versions.
The Python build process includes some steps where the newly built interpreter is launched and attempts to load some of the newly built modules, including extension modules (which are written in C and are actually shared objects (.so files)).
When an .so is loaded, the loader must find (recursively) all the .so files that the .so needs (depends on), otherwise it won't be able to load it.
Python has some modules (e.g. _ssl*.so, _hashlib*.so) that depend on the OpenSSL libs. Since you built yours against OpenSSL 1.1.1 (whose lib names differ from what comes by default on the system, typically 1.0.*), the loader won't be able to use the default ones.
What you need to do is instruct the loader (check [Man7]: LD.SO(8) for more details) where to look for "your" OpenSSL libs (which are located under /usr/local/ssl/lib). One way of doing that is adding their path to the ${LD_LIBRARY_PATH} env var (before building Python):
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/ssl/lib
./configure
make
make install
You might also want to take a look at [Python.Docs]: Configure Python - Libraries options (--with-openssl, --with-openssl-rpath).
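On Python versions new enough to have it (3.7+ for --with-openssl), that boils down to something along these lines, using the question's /usr/local/ssl prefix:
./configure --with-openssl=/usr/local/ssl
make
make install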
Check [SO]: How to enable FIPS mode for libcrypto and libssl packaged with Python? (#CristiFati's answer) for details on a wider problem (remotely) related to yours.
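Once the build gets past that point, a quick way to confirm which OpenSSL the freshly built interpreter actually linked against (the exact version string will differ on your machine):
./python -c "import ssl; print(ssl.OPENSSL_VERSION)"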
What I did to fix this:
./configure --with-ssl=./libssl --prefix=/subsystem
sed -i 's!^RUNSHARED=!RUNSHARED=LD_LIBRARY_PATH=/path/to/own/libssl/lib!' Makefile
make
make install
Setting LD_LIBRARY_PATH with export was not sufficient.
With Python 3.6.5 and openssl-1.1.0h I get stuck on the same problem. I have uncommented _socket socketmodule.c.
Is there any platform-independent way to get the path to a Python installation's libpython, for use as a CMake argument? sysconfig.get_config_var gives some pieces, but there's no consistent way I can get this working.
On OSX:
No variable contains the actual basename of the library (libpython2.7.dylib)
sysconfig.get_config_var('LIBDIR') returns the directory libpython2.7.dylib is in (/usr/local/opt/python/Frameworks/Python.framework/Versions/2.7/lib)
edit: There seems to be a bit of a discrepancy in when sysconfig reports a symlink and when it reports a real path. LIBDIR does in fact contain libpython2.7.dylib, but what I just noticed is that this is a symlink to /usr/local/Cellar/python/2.7.13_1/Frameworks/Python.framework/Versions/2.7/Python.
INSTSONAME points to Python.framework/Versions/2.7/Python, the basename of that path, but no variable I can find points to the parent part of that path.
On Ubuntu:
sysconfig.get_config_var('INSTSONAME') gives me the name of the library (libpython2.7.so.1.0).
No variable contains the directory it's in (/usr/lib/x86_64-linux-gnu/).
LIBDIR returns only /usr/lib
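For reference, this is roughly what I'm probing; the commented values are from my machines and show why a simple join of the pieces doesn't work everywhere:
import os
import sysconfig

libdir = sysconfig.get_config_var('LIBDIR')      # Ubuntu: /usr/lib, OSX: .../Versions/2.7/lib
soname = sysconfig.get_config_var('INSTSONAME')  # Ubuntu: libpython2.7.so.1.0, OSX: Python.framework/Versions/2.7/Python

# naive guess - correct on some systems, but misses /usr/lib/x86_64-linux-gnu on Ubuntu
print(os.path.join(libdir, soname))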
CMake provides Python modules for this: FindPythonLibs and FindPythonInterp.
If those don't work for you, just set the following variables when calling cmake: -DPYTHON_EXECUTABLE:FILEPATH, -DPYTHON_LIBRARY:FILEPATH and -DPYTHON_INCLUDE_DIR:FILEPATH; for more info look here.
Example:
cmake .. -G "Sublime Text 2 - Ninja"
-DCMAKE_BUILD_TYPE=Release
-DPYTHON_EXECUTABLE:FILEPATH=d:\virtualenvs\python362_32
-DPYTHON_LIBRARY:FILEPATH=d:\virtualenvs\python362_32\libs\python36.lib
-DPYTHON_INCLUDE_DIR:FILEPATH=d:\virtualenvs\python362_32\include
I'm getting an error while trying to install FEniCS on Mac OS X 10.11.6. I've read the responses to similar questions on this website and have tried the suggested solutions, but I must be doing something wrong.
On running the command:
curl -s https://fenicsproject.org/fenics-install.sh | bash
I get an error while the cython package is being installed:
[cython] Building cython/e2t4ieqlgjl3, follow log with:
[cython] tail -f /Users/sophiaw/.hashdist/tmp/cython-e2t4ieqlgjl3-1/_hashdist/build.log
[cython|ERROR] Command '[u'/bin/bash', '_hashdist/build.sh']' returned non-zero exit status 1
[cython|ERROR] command failed (code=1); raising.
The message from build.log is:
Checking .pth file support in
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
/Users/sophiaw/.hashdist/bld/python/pf77qttkbtzn/bin/python -E -c pass
TEST FAILED:
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
does NOT support .pth files error: bad install directory or
PYTHONPATH
You are attempting to install a package to a directory that is not on
PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
and your PYTHONPATH environment variable currently contains:
'/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/Python.framework/Versions/2.7/lib/python2.7/site-packages:'
Here are some of your options for correcting the problem:
You can choose a different installation directory, i.e., one that is on PYTHONPATH or supports .pth files
You can add the installation directory to the PYTHONPATH environment variable. (It must then also be on PYTHONPATH whenever you run Python
and want to use the package(s) you are installing.)
You can set up the installation directory to support ".pth" files by using one of the approaches described here:
https://pythonhosted.org/setuptools/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
I've tried adding this to my .bash_profile, but I get the same error:
export PYTHONPATH=/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages:$PYTHONPATH
How can I fix this error?
This was resolved by the FEniCS support group: to install FEniCS on OS X, Docker is a more convenient option.
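For reference, the Docker route boiled down to something like the following; the image name is the one the FEniCS project documented at the time and may have changed since:
docker run -ti -v $(pwd):/home/fenics/shared quay.io/fenicsproject/stable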
I'm trying to compile Python (version 3.1.3) for ARM, following this guide.
These are the commands I am issuing (on Ubuntu 12):
CC=arm-linux-gnueabi-gcc CXX=arm-linux-gnueabi-g++ AR=arm-linux-gnueabi-ar RANLIB=arm-linux-gnueabi-ranlib ./configure --host --build=x86_64-linux-gnu --prefix=/python
make HOSTPYTHON=./hostpython HOSTPGEN=./Parser/hostpgen BLDSHARED="arm-linux-gnueabi-gcc -shared" CROSS_COMPILE=arm-linux-gnueabi- CROSS_COMPILE_TARGET=yes HOSTARCH=x86_64-linux-gnu BUILDARCH=x86_64-linux-gnu
make install HOSTPYTHON=./hostpython BLDSHARED="arm-linux-gnueabi-gcc -shared" CROSS_COMPILE=arm-linux-gnueabi- CROSS_COMPILE_TARGET=yes prefix=~/Python-2.7.2/_install
A few things to notice.
When executing the first command, if --host is set to arm-linux, the command won't execute, telling me that I should use '--host' for cross-compiling. This is why I did not set it to anything.
When running the second line, I get
configure: WARNING: Cache variable ac_cv_host contains a newline.
Failed to configure _ctypes module
Python build finished, but the necessary bits to build these modules
were not found:
_curses _curses_panel _dbm
_gdbm _hashlib _sqlite3
_ssl bz2 ossaudiodev readline zlib To find the necessary bits, look
in setup.py in detect_modules() for the module's name.
Failed to build these modules:
_tkinter
I get a similar error when running the third line, but I guess it's due to the fact that the command above did not work.
I'm trying to see if anyone can help me fix it.
It's much easier to compile natively under QEMU than to cross-compile.
Unpack an ARM chroot from whichever project you like, e.g. Arch Linux ARM, Raspbian, etc.
That already gives you a binary Python for ARM, but if you really want to compile your own:
Download qemu-user-static (e.g. the Debian package) and unpack it.
Install that single static binary into the root of your ARM chroot.
Add the magic hex to binfmt in proc. Instructions for Debian, Gentoo, generic; list of magic hex sequences. Below are my settings:
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register
export QEMU_CPU=arm926
Optionally, mount --bind /tmp, /proc, /sys, as required.
Enjoy your virtual ARM!
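As a quick sanity check (purely illustrative), the chroot's Python should report an ARM machine type:
python -c "import platform; print(platform.machine())"   # e.g. 'armv5tejl' under qemu-arm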
I got the same error and just ignored it and carried on with the procedure suggested by
http://randomsplat.com/id5-cross-compiling-python-for-embedded-linux.html
It worked with a hello_world program. You can also run a testall.py file from the _install/lib/Python2.7/ folder.
You can also refer to
http://whatschrisdoing.com/blog/talks/PyConTalk2012.pdf
I'm working on several Python projects that run on various versions of Python. I'm hoping to set up my vim environment to use ropevim, pyflakes, and pylint, but I've run into some issues caused by using a single vim (compiled for a specific version of Python which doesn't match the project's Python version).
I'm hoping to build vim into each of my virtualenv directories but I've run into an issue and I can't get it to work. When I try to build vim from source, despite specifying the Python config folder in my virtualenv, the system-wide Python interpreter is always used.
Currently, I have Python 2.6.2 and Python 2.7.1 installed with several virtualenvs created from each version. I'm using Ubuntu 10.04 where the system-default Python is 2.6.5. Every time I compile vim and call :python import sys; print(sys.version) it returns Python 2.6.5.
configure --prefix=/virtualenv/project --enable-pythoninterp=yes --with-python-config-dir=/virtualenv/project/lib/python2.6/config
Results in the following in config.log:
...
configure:5151: checking --enable-pythoninterp argument
configure:5160: result: yes
configure:5165: checking for python
configure:5195: result: /usr/bin/python
...
It should be /virtualenv/project/bin/python. Is there any way to specify the Python interpreter for vim to use?
NOTE: I can confirm that /virtualenv/project/bin appears at the front of PATH environment variable.
I'd recommend building vim against the 2 interpreters, then invoking it using the shell script I provided below to point it to a particular virtualenv.
I was able to build vim against Python 2.7 using the following command (2.7 is installed under $HOME/root):
% LD_LIBRARY_PATH=$HOME/root/lib PATH=$HOME/root/bin:$PATH \
./configure --enable-pythoninterp \
--with-python-config-dir=$HOME/root/lib/python2.7/config \
--prefix=$HOME/vim27
% make install
% $HOME/bin/vim27
:python import sys; print sys.path[:2]
['/home/pat/root/lib/python27.zip', '/home/pat/root/lib/python2.7']
Your virtualenv is actually a thin wrapper around the Python interpreter it was created with -- $HOME/foobar/lib/python2.6/config is a symlink to /usr/lib/python2.6/config.
So if you created it with the system interpreter, VIM will probe for this and ultimately link against the real interpreter, using the system sys.path by default, even though configure will show the virtualenv's path:
% PATH=$HOME/foobar/bin:$PATH ./configure --enable-pythoninterp \
--with-python-config-dir=$HOME/foobar/lib/python2.6/config \
--prefix=$HOME/foobar
..
checking for python... /home/pat/foobar/bin/python
checking Python's configuration directory... (cached) /home/pat/foobar/lib/python2.6/config
..
% make install
% $HOME/foobar/bin/vim
:python import sys; print sys.path[:1]
['/usr/lib/python2.6']
The workaround: Since your system vim is most likely compiled against your system python, you don't need to rebuild vim for each virtualenv: you can just drop a shell script named vim in your virtualenv's bin directory, which extends the PYTHONPATH before calling system vim:
Contents of $HOME/foobar/bin/vim:
#!/bin/sh
ROOT=`cd \`dirname $0\`; cd ..; pwd`
PYTHONPATH=$ROOT/lib/python2.6/site-packages /usr/bin/vim $*
When that is invoked, the virtualenv's sys.path is inserted:
% $HOME/foobar/bin/vim
:python import sys; print sys.path[:2]
['/home/pat/foobar/lib/python2.6/site-packages', '/usr/lib/python2.6']
For what it's worth, since no one seems to have answered this here, I had some luck using a command line like the following:
vi_cv_path_python=/usr/bin/python26 ./configure --includedir=/usr/include/python2.6/ --prefix=/home/bcrowder/local --with-features=huge --enable-rubyinterp --enable-pythoninterp --disable-selinux --with-python-config-dir=/usr/lib64/python2.6/config
I would like to give a solution similar to crowder's that works quite well for me.
Imagine you have Python installed in /opt/Python-2.7.5 and that the structure of that folder is
$ tree -d -L 1 /opt/Python-2.7.5/
/opt/Python-2.7.5/
├── bin
├── include
├── lib
└── share
and you would like to build vim with that version of Python. All you need to do is
$ vi_cv_path_python=/opt/Python-2.7.5/bin/python ./configure --enable-pythoninterp --prefix=/SOME/FOLDER
Thus, just by explicitly giving the vi_cv_path_python variable to configure, the script will deduce everything on its own (even the config dir).
This was tested multiple times on vim 7.4+ and lately with vim-7-4-324.
I was having this same issue with 3 different versions of python on my system.
For me the easiest thing was to change my $PATH env variable so that the folder containing the Python version I wanted (in my case /usr/local/bin) was found before the others.
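For example (the path matches my setup; adjust as needed), before running ./configure:
export PATH=/usr/local/bin:$PATH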
While compiling vim80, the system Python was 2.6 and I had another Python 2.7 under ~/local/bin. I found that, to make the compilation work, I had to:
update $PATH to place my Python path first,
add a soft link, ln -s python python2 (the configure script tries to locate the Python config by probing python2),
run make distclean before re-running ./configure, to make sure no stale cached value is picked up.