How to install lessc and nodejs in a Python virtualenv?

I would like to install a nodejs script (lessc) into a virtualenv.
How can I do that?
Thanks
Natim

I like shorrty's answer; he recommended using nodeenv. See:
is there an virtual environment for node.js?
I followed this guide:
http://calvinx.com/2013/07/11/python-virtualenv-with-node-environment-via-nodeenv/
All I had to do myself was:
. ../bin/activate # switch to my Python virtualenv first
pip install nodeenv # then install nodeenv (nodeenv==0.7.1 was installed)
nodeenv --python-virtualenv # Use current python virtualenv
npm install -g less # install lessc in the virtualenv
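As a sanity check afterwards (my addition, not part of the original answer; paths assume the default nodeenv layout), you can confirm that lessc resolves inside the virtualenv rather than system-wide:
which lessc      # should print $VIRTUAL_ENV/bin/lessc
lessc --version  # confirms the binary actually runs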

Here is what I have used so far; I suspect it could be optimized.
Install nodejs
wget http://nodejs.org/dist/v0.6.8/node-v0.6.8.tar.gz
tar zxf node-v0.6.8.tar.gz
cd node-v0.6.8/
./configure --prefix=/absolute/path/to/the/virtualenv/
make
make install
Install npm (Node Package Manager)
. /absolute/path/to/the/virtualenv/bin/activate  # source the activate script
curl https://npmjs.org/install.sh | sh
Install lesscss
npm install less -g
Once you activate your virtualenv, you can use lessc.
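To double-check that global npm installs really target the virtualenv (a verification step of my own; npm config get is a standard npm subcommand):
npm config get prefix   # should print /absolute/path/to/the/virtualenv
which lessc             # should resolve to the virtualenv's bin/ directory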

I created a bash script to automate Natim's solution.
Make sure your Python virtualenv is active, then just run the script. NodeJS, NPM and lessc will be downloaded and installed into your virtualenv.
http://pastebin.com/wKLWgatq
#!/bin/sh
#
# This script will download NodeJS, NPM and lessc, and install them into your
# Python virtualenv.
#
# Based on a post by Natim:
# http://stackoverflow.com/questions/8986709/how-to-install-lessc-and-nodejs-in-a-python-virtualenv

NODEJS="http://nodejs.org/dist/v0.8.3/node-v0.8.3.tar.gz"

# Check dependencies
# Note: { ...; } groups are used instead of ( ... ) subshells so that a
# failing step aborts the whole script, not just a child shell.
for dep in gcc wget curl tar make; do
    which $dep > /dev/null || { echo "ERROR: $dep not found"; exit 10; }
done

# Must be run from virtual env
if [ "$VIRTUAL_ENV" = "" ]; then
    echo "ERROR: you must activate the virtualenv first!"
    exit 1
fi

echo "1) Installing nodejs in current virtual env"
echo
cd "$VIRTUAL_ENV"

# Create temp dir
if [ ! -d "tmp" ]; then
    mkdir tmp
fi
cd tmp || { echo "ERROR: entering tmp directory failed"; exit 4; }
echo -n "- Entered temp dir: "
pwd

# Download
fname=`basename "$NODEJS"`
if [ -f "$fname" ]; then
    echo "- $fname already exists, not downloading"
else
    echo "- Downloading $NODEJS"
    wget "$NODEJS" || { echo "ERROR: download failed"; exit 2; }
fi

echo "- Extracting"
tar -xvzf "$fname" || { echo "ERROR: tar failed"; exit 3; }
cd `basename "$fname" .tar.gz` || { echo "ERROR: entering source directory failed"; exit 4; }

echo "- Configure"
./configure --prefix="$VIRTUAL_ENV" || { echo "ERROR: configure failed"; exit 5; }
echo "- Make"
make || { echo "ERROR: build failed"; exit 6; }
echo "- Install"
make install || { echo "ERROR: install failed"; exit 7; }

echo
echo "2) Installing npm"
echo
curl https://npmjs.org/install.sh | sh || { echo "ERROR: install failed"; exit 7; }

echo
echo "3) Installing lessc with npm"
echo
npm install less -g || { echo "ERROR: lessc install failed"; exit 8; }
echo "Congratulations! lessc is now installed in your virtualenv"
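Usage is then simply (the script filename below is a placeholder; save it under any name you like):
. /path/to/virtualenv/bin/activate   # activate the target virtualenv first
sh install_node_lessc.sh             # run the script above (placeholder name)
lessc --version                      # verify the result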

Here is my generic solution for working with Gems and npm packages inside a virtualenv.
Both gem and npm can be redirected via environment variables: GEM_HOME and npm_config_prefix.
You can put the snippet below in your postactivate or activate script (which one depends on whether you use virtualenvwrapper or not):
export GEM_HOME="$VIRTUAL_ENV/lib/gems"
export GEM_PATH=""
PATH="$GEM_HOME/bin:$PATH"
export npm_config_prefix=$VIRTUAL_ENV
export PATH
Now, inside your virtualenv, all libraries installed via gem install or npm -g install will be installed into the virtualenv, with their binaries added to your PATH.
If you use virtualenvwrapper, you can make this change global to all your virtualenvs by modifying the postactivate hook living inside your $VIRTUALENVWRAPPER_HOOK_DIR.
This solution doesn't cover installing nodejs inside the virtualenv, but I think it's better to delegate that task to the packaging system (apt, yum, brew...) and install node and npm globally.
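A quick way to verify the redirection works (a check of my own, using standard gem/npm subcommands):
gem env home            # should print $VIRTUAL_ENV/lib/gems
npm config get prefix   # should print the virtualenv root
npm install -g less && which lessc   # lessc should land in $VIRTUAL_ENV/bin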
Edit:
I recently created two plugins for virtualenvwrapper to do this automatically, one for gem and one for npm:
http://pypi.python.org/pypi/virtualenvwrapper.npm
http://pypi.python.org/pypi/virtualenvwrapper.gem

Related

Compilation error when I tried to build packages for NumPy and SciPy using Clang that I built from sources

I tried to build wheel packages of NumPy and SciPy using Clang and Clang++ for RISC-V, cross-compiling on Ubuntu 20.04 under WSL.
First, I built LLVM from the standard repository and the RISC-V GCC toolchain from the rvv-next branch, both from source, and used these toolchains to build the latest OpenBLAS from source.
After this, I set up a cross-compilation virtual environment and activated it.
Next, I installed the necessary Python packages and started building NumPy and SciPy.
As a result, I got the wheel package for NumPy, but a compilation error instead of the wheel package for SciPy. The compilation error is:
scipy/spatial/qhull.c:10158:72: error: incompatible function pointer types passing 'void (*)(qhT *, void *, vertexT *, vertexT *, setT *, unsigned int)' (aka 'void (*)(struct qhT *, void *, struct vertexT *, struct vertexT *, struct setT *, unsigned int)') to parameter of type 'printvridgeT' (aka 'void (*)(struct qhT *, struct _IO_FILE *, struct vertexT *, struct vertexT *, struct setT *, unsigned int)') [-Wincompatible-function-pointer-types]
(void)(qh_eachvoronoi_all(__pyx_v_self->_qh, ((void *)__pyx_v_self), (&__pyx_f_5scipy_7spatial_5qhull__visit_voronoi), (__pyx_v_self->_qh[0]).UPPERdelaunay, qh_RIDGEall, 1));
The file scipy/spatial/qhull.c was generated by Cython, when the build process was running.
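A possible explanation (an assumption, not something verified in this post): recent Clang versions promote -Wincompatible-function-pointer-types to an error by default, while GCC only warns about the same generated code, which would fit the observation below that the GCC toolchain succeeds. If so, downgrading the diagnostic back to a warning before building SciPy might unblock the build:
# Untested workaround sketch: relax the diagnostic that aborts the SciPy build.
export CFLAGS="$CFLAGS -Wno-error=incompatible-function-pointer-types"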
More precisely, I did the following:
cd ~
mkdir wheels-clang
cd wheels-clang
mkdir other-blas scripts
./scripts/build_gcc_and_clang.sh "$HOME/wheels-clang"
./scripts/download_build_and_install_openblas_using_llvm_toolchain.sh "$HOME/wheels-clang/other-blas" "$HOME/wheels-clang/other-blas/OpenBLAS/usr"
export SYSROOT=/opt/riscv/sysroot
export RISCV_GCC=$HOME/wheels-clang/scripts/llvm-clang-wrapper.sh
export RISCV_GPP=$HOME/wheels-clang/scripts/llvm-clang++-wrapper.sh
export RISCV_GFORTRAN=/opt/riscv/bin/riscv64-unknown-linux-gnu-gfortran
export RISCV_LD=/opt/riscv/bin/riscv64-unknown-linux-gnu-ld
export RISCV_AR=/opt/riscv/bin/riscv64-unknown-linux-gnu-ar
path-to-ubuntu-wsl-python3.9.14/bin/python3 -m pip install crossenv
path-to-ubuntu-wsl-python3.9.14/bin/python3 -m crossenv --cc $RISCV_GCC --cxx $RISCV_GPP --ar $RISCV_AR path-to-riscv-python3.9.14/bin/python riscv_cross_venv_stock
source riscv_cross_venv_stock/bin/activate
build-pip install wheel cython pybind11 pythran ply beniget gast pythran numpy==1.19.3
pip install wheel cython pybind11 pythran ply beniget gast pythran numpy==1.19.3
git clone --recurse-submodules -b v1.7.3 --depth 1 https://github.com/scipy/scipy.git
cd scipy/
echo '[openblas]' > site.cfg
echo 'libraries = openblas' >> site.cfg
echo "library_dirs = $HOME/wheels-clang/other-blas/OpenBLAS/usr/lib" >> site.cfg
echo "include_dirs = $HOME/wheels-clang/other-blas/OpenBLAS/usr/include" >> site.cfg
echo "runtime_library_dirs = $HOME/wheels-clang/other-blas/OpenBLAS/usr/lib" >> site.cfg
export BLAS_DIR=$HOME/wheels-clang/other-blas/OpenBLAS/usr/
export OPEN_BLAS_DIR=$HOME/wheels-clang/other-blas/OpenBLAS/usr/
export F90=$RISCV_GFORTRAN
export F77=$RISCV_GFORTRAN
export LD=$RISCV_LD
export CC=$HOME/wheels-clang/scripts/llvm-clang-wrapper.sh
export CXX=$HOME/wheels-clang/scripts/llvm-clang++-wrapper.sh
export LD_LIBRARY_PATH=$HOME/wheels-clang/other-blas/OpenBLAS/usr/lib:$LD_LIBRARY_PATH
export LDFLAGS="$LDFLAGS -L$HOME/wheels-clang/other-blas/OpenBLAS/usr/lib"
export CPPFLAGS="$CPPFLAGS -I$HOME/wheels-clang/other-blas/OpenBLAS/usr/include"
export PKG_CONFIG_PATH="$HOME/wheels-clang/other-blas/OpenBLAS/usr/lib/pkgconfig"
export BLAS=$HOME/wheels-clang/other-blas/OpenBLAS/usr/lib/libopenblas.so
pip wheel . -v -w ../wheels_stock
cd ..
cp riscv_cross_venv_stock/cross/lib/python3.9/site-packages/numpy/core/lib/libnpymath.a riscv_cross_venv_stock/build/lib/python3.9/site-packages/numpy/core/lib/libnpymath.a
cp riscv_cross_venv_stock/cross/lib/python3.9/site-packages/numpy/random/lib/libnpyrandom.a riscv_cross_venv_stock/build/lib/python3.9/site-packages/numpy/random/lib/libnpyrandom.a
cd scipy/
pip wheel . -v -w ../wheels_stock
I used the following scripts:
the script build_gcc_and_clang.sh:
#!/bin/bash

function install_prerequisites()
{
    sudo apt -y install autoconf automake autotools-dev
    sudo apt -y install libmpc-dev libmpfr-dev libgmp-dev
    sudo apt -y install build-essential
    sudo apt -y install bison flex
    sudo apt -y install curl python3
    sudo apt -y install texinfo gperf libtool patchutils bc
    sudo apt -y install zlib1g-dev libexpat-dev
    sudo apt -y install make cmake git-all
}

function create_installation_dirs()
{
    sudo mkdir /opt/riscv
    sudo chown -R $USER /opt/riscv
}

function download_and_build_gcc()
{
    git clone https://github.com/riscv/riscv-gnu-toolchain --branch rvv-next
    cd riscv-gnu-toolchain/
    ./configure --with-arch=rv64gcv --prefix=/opt/riscv --enable-linux
    make linux
}

function download_and_build_llvm()
{
    git clone https://github.com/llvm/llvm-project
    cd llvm-project
    mkdir build
    cd build
    cmake -G 'Unix Makefiles' -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt/riscv -DLLVM_TARGETS_TO_BUILD="RISCV" -DLLVM_DEFAULT_TARGET_TRIPLE="riscv64-unknown-linux-gnu" -DLLVM_ENABLE_PROJECTS="clang;llvm;lld;flang" ../llvm
    make -j4
    make install
}

DIR_FOR_SOURCES="$1"
if [ -z "$DIR_FOR_SOURCES" ]
then
    DIR_FOR_SOURCES="$HOME"
fi
echo "The parent directory for GCC and LLVM sources: $DIR_FOR_SOURCES"
install_prerequisites
create_installation_dirs
cd "$DIR_FOR_SOURCES"
download_and_build_gcc
cd ..
download_and_build_llvm
cd ..
the script download_build_and_install_openblas_using_llvm_toolchain.sh:
#!/bin/bash

function clone_openblas()
{
    git clone https://github.com/xianyi/OpenBLAS.git
    cd OpenBLAS
    sed -i 's/ifeq/#ifeq/g' Makefile.riscv64
    sed -i 's/endif/#endif/g' Makefile.riscv64
    sed -i 's/CCOMMON_OPT += -march=rv64imafdcv0p7_zfh_xtheadc -mabi=lp64d -mtune=c920/CCOMMON_OPT += --target=riscv64-unknown-linux-gnu -march=rv64gcv_zfh -mabi=lp64d --sysroot=$(SYSROOT)/g' Makefile.riscv64
    sed -i 's/FCOMMON_OPT += -march=rv64imafdcv0p7_zfh_xtheadc -mabi=lp64d -mtune=c920 -static/FCOMMON_OPT += -march=rv64gcv_zfh -mabi=lp64d -static --sysroot=$(SYSROOT)/g' Makefile.riscv64
    sed -i 's/TARGET_FLAGS = -march=rv64gcv0p7_zfh_xtheadc -mabi=lp64d/TARGET_FLAGS = --target=riscv64-unknown-linux-gnu -march=rv64gcv_zfh -mabi=lp64d/g' Makefile.prebuild
}

function setup_environment_vars()
{
    HOSTCC=clang
    CROSS_SUFFIX=llvm-
    ROOT=/opt/riscv
    CC=$ROOT/bin/clang
    FC=/opt/riscv/bin/riscv64-unknown-linux-gnu-gfortran
    AR=$ROOT/bin/${CROSS_SUFFIX}ar
    RANLIB=$ROOT/bin/${CROSS_SUFFIX}ranlib
    SYSROOT=/opt/riscv/sysroot
    GCC_TOOLCHAIN=/opt/riscv
    CFLAGS=" -fuse-ld=lld --gcc-toolchain=$GCC_TOOLCHAIN"
}

function build_openblas()
{
    make -j4 TARGET=C910V CC=$CC FC=$FC AR=$AR RANLIB=$RANLIB HOSTCC=$HOSTCC SYSROOT=$SYSROOT CROSS_SUFFIX=$CROSS_SUFFIX CFLAGS="$CFLAGS" V=1
    if ! [ -d "$1" ]
    then
        echo "The defined installation directory $1 does not exist and will be created."
        mkdir -p "$1"
    fi
    make install PREFIX="$1"
    cd ..
}

DIR_FOR_SOURCES="$1"
INSTALLATION_DIR="$2"
if [ -z "$DIR_FOR_SOURCES" ]
then
    echo "A directory for OpenBLAS sources was not defined. The directory $HOME will be used by default."
    DIR_FOR_SOURCES="$HOME"
fi
if [ -z "$INSTALLATION_DIR" ]
then
    echo "Creating the default directory to install."
    sudo mkdir /opt/OpenBLAS
    sudo chown -R $USER /opt/OpenBLAS
    INSTALLATION_DIR="/opt/OpenBLAS"
    echo "A directory to install OpenBLAS was not defined. The directory $INSTALLATION_DIR will be used by default."
fi
echo "Directory for BLAS sources: $DIR_FOR_SOURCES"
echo "Installation directory: $INSTALLATION_DIR"
cd "$DIR_FOR_SOURCES"
clone_openblas
setup_environment_vars
build_openblas "$INSTALLATION_DIR"
echo "OpenBLAS was installed into $INSTALLATION_DIR"
the script llvm-clang-wrapper.sh:
#!/bin/bash
GCC_TOOLCHAIN=/opt/riscv
# Forward all wrapper arguments to the cross clang via "$@".
/opt/riscv/bin/clang --target=riscv64-unknown-linux-gnu -march=rv64gcv_zfh -mabi=lp64d -v --sysroot=$SYSROOT -fuse-ld=lld --gcc-toolchain=$GCC_TOOLCHAIN "$@"
the script llvm-clang++-wrapper.sh:
#!/bin/bash
GCC_TOOLCHAIN=/opt/riscv
# Forward all wrapper arguments to the cross clang++ via "$@".
/opt/riscv/bin/clang++ --target=riscv64-unknown-linux-gnu -march=rv64gcv_zfh -mabi=lp64d -v --sysroot=$SYSROOT -fuse-ld=lld --gcc-toolchain=$GCC_TOOLCHAIN "$@"
When I tried to use the standard Ubuntu cross-compilation GCC toolchain for RISC-V, I had no compilation errors, and I got correct NumPy and SciPy packages for RISC-V.
More precisely, I did the following.
Install necessary deb-packages:
sudo apt install make cmake gcc g++ ccache gcc-riscv64-linux-gnu g++-riscv64-linux-gnu gfortran-riscv64-linux-gnu libgomp1-riscv64-cross libssl-dev meson
Download and build OpenBLAS:
mkdir wheels
cd wheels/
git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS/
sed -i "s/CCOMMON_OPT += -march=rv64imafdcv0p7_zfh_xtheadc -mabi=lp64d -mtune=c920/CCOMMON_OPT += -march=rv64imafdc -mabi=lp64d/g" Makefile.riscv64
sed -i "s/FCOMMON_OPT += -march=rv64imafdcv0p7_zfh_xtheadc -mabi=lp64d -mtune=c920 -static/FCOMMON_OPT += -march=rv64imafdc -mabi=lp64d -static/g" Makefile.riscv64
make -j4 HOSTCC=gcc CC=riscv64-linux-gnu-gcc FC=riscv64-linux-gnu-gfortran ARCH=riscv64 TARGET=RISCV64_GENERIC
mkdir usr
make PREFIX=$HOME/wheels/OpenBLAS/usr/ install
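As an optional sanity check (my addition; the file utility is part of any standard Ubuntu install), the resulting library can be inspected to confirm it was built for RISC-V:
file -L $HOME/wheels/OpenBLAS/usr/lib/libopenblas.so   # should report an ELF 64-bit RISC-V shared object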
Set up some environment variables:
export RISCV_GCC=riscv64-linux-gnu-gcc
export RISCV_GPP=riscv64-linux-gnu-g++
export RISCV_GFORTRAN=riscv64-linux-gnu-gfortran
export RISCV_LD=riscv64-linux-gnu-ld
export RISCV_AR=riscv64-linux-gnu-ar
Install crossenv package and create a virtual environment:
path-to-ubuntu-wsl-python3.9.14/bin/python3 -m pip install crossenv
path-to-ubuntu-wsl-python3.9.14/bin/python3 -m crossenv --cc $RISCV_GCC --cxx $RISCV_GPP --ar $RISCV_AR path-to-riscv-python3.9.14/bin/python riscv_cross_venv_stock
source riscv_cross_venv_stock/bin/activate
build-pip install wheel cython pybind11 pythran ply beniget gast pythran numpy==1.19.3
pip install wheel cython pybind11 pythran ply beniget gast pythran numpy==1.19.3
cd ..
Clone SciPy:
git clone --recurse-submodules -b v1.7.3 --depth 1 https://github.com/scipy/scipy.git
cd scipy
Create the file site.cfg:
echo '[openblas]' >> site.cfg
echo 'libraries = openblas' >> site.cfg
echo "library_dirs = $HOME/wheels/OpenBLAS/usr/lib" >> site.cfg
echo "include_dirs = $HOME/wheels/OpenBLAS/usr/include" >> site.cfg
echo "runtime_library_dirs = $HOME/wheels/OpenBLAS/usr/lib" >> site.cfg
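Equivalently (purely a stylistic alternative to the echo lines above), the whole file can be written with one heredoc, which also avoids appending to a stale site.cfg left over from a previous run:
cat > site.cfg <<EOF
[openblas]
libraries = openblas
library_dirs = $HOME/wheels/OpenBLAS/usr/lib
include_dirs = $HOME/wheels/OpenBLAS/usr/include
runtime_library_dirs = $HOME/wheels/OpenBLAS/usr/lib
EOF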
Set up the following environment variables:
export BLAS_DIR=$HOME/wheels/OpenBLAS/usr/
export OPEN_BLAS_DIR=$HOME/wheels/OpenBLAS/usr/
export F90=$RISCV_GFORTRAN
export F77=$RISCV_GFORTRAN
export LD=$RISCV_LD
export LD_LIBRARY_PATH=$HOME/wheels/OpenBLAS/usr/lib:$LD_LIBRARY_PATH
Build NumPy and SciPy:
pip wheel . -v -w ../wheels_stock
cd ../OpenBLAS/
cp riscv_cross_venv_stock/cross/lib/python3.9/site-packages/numpy/core/lib/libnpymath.a riscv_cross_venv_stock/build/lib/python3.9/site-packages/numpy/core/lib/libnpymath.a
cp riscv_cross_venv_stock/cross/lib/python3.9/site-packages/numpy/random/lib/libnpyrandom.a riscv_cross_venv_stock/build/lib/python3.9/site-packages/numpy/random/lib/libnpyrandom.a
cd ~/wheels/scipy/
pip wheel . -v -w ../wheels_stock
As a result, I got the correct wheel packages of NumPy and SciPy for RISC-V.
Also, I tried to use the Clang/Clang++ and GFortran compilers from the standard Ubuntu 20.04 repos, but I got a strange linker error which, unfortunately, I did not record.
I would like to know how I can build NumPy and SciPy for a RISC-V target using cross-compilation with Clang.

What could be keeping `conda pack` from picking up monkey patches to the packages?

I am trying to monkey patch a Python package before using conda pack to package up all of the packages for deployment.
The script sets up conda:
conda install -y --channel conda-forge conda-pack
conda create -y --name venv python=3.7
conda install -y --name venv --file requirements.txt
Then it monkey patches the library:
sed --in-place \
's/CFUNCTYPE(c_int)(lambda: None)/# CCCFUNCTYPE(c_int)(lambda: None)/g' \
/opt/conda/envs/venv/lib/python3.7/ctypes/__init__.py
Then it packages everything up for deployment:
conda pack --name venv --output "$BUILD_DIR/runtime.tar.gz"
So the weird thing is that when I copy the file directly into the build folder:
cp /opt/conda/envs/venv/lib/python3.7/ctypes/__init__.py "$BUILD_DIR"
The monkey-patched file is there.
However, when I extract $BUILD_DIR/runtime.tar.gz, the file is in its original form.
The other weird behavior is that when I manually run these steps, the monkey patched file is in $BUILD_DIR/runtime.tar.gz.
There are quite a few containers involved, so I thought that maybe conda was using some cached tarball, so I tried adding this to the script:
conda clean --tarballs
But that still didn't fix the problem.
I also tried to use conda pack's explicit path option, but it didn't work either:
conda pack --prefix /opt/conda/envs/venv --output "$BUILD_DIR/runtime.explicit.tar.gz"
Does conda pack pull files from some location other than /opt/conda/envs/venv/lib/python3.7/site-packages?
This doesn't explain why doing things manually would work, but maybe it'll point me to a new rock to look under.
Thank you for your time 🙏
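One more thing that may be worth checking (an assumption on my part, not something established in this question): conda typically hardlinks files from its package cache into environments, so which copy of a file a tool sees can depend on how it resolves those links. Comparing link counts and inodes shows whether the patched file is still shared with the cache:
# A link count greater than 1 means the file is still hardlinked into the package cache.
stat -c '%h %i %n' /opt/conda/envs/venv/lib/python3.7/ctypes/__init__.py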
Here is the entire script:
#!/usr/bin/env bash
#
# Bundle this project into a Rapid Deployment Archive (RDA)
set -ex
###########################
# Pre-reqs and pre-checks #
###########################
if [ ! -x "$(command -v conda)" ]; then
    echo "conda is required to run this script" >&2
    exit 1
fi
if [ -n "$CI_PROJECT_DIR" ]; then
    DIR="$CI_PROJECT_DIR"
else
    DIR="$(cd "$(dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd)/.."
fi
DIST_DIR="$DIR/dist"
rm -rvf "$DIST_DIR"/*.zip
BUILD_DIR="$(mktemp -d)"
#####################################
# Build client Angular UI component #
#####################################
CLIENT_DIR="$DIR/client"
mkdir -pv "$BUILD_DIR/client"
pushd "$CLIENT_DIR" || exit 1
npm install
npm run build:default
cp -R "$CLIENT_DIR/dist/template-angular-ts-master/"* "$BUILD_DIR/client"
popd
#######################################
# Build server Python/flask component #
#######################################
# Packages are installed to /opt/conda/envs/$VENV_NAME/lib/python$PYTHON_VERSION/site-packages
if [ -n "$SSL_NO_VERIFY" ]; then
    conda config --set ssl_verify false
fi
conda install -y --channel conda-forge conda-pack
conda create -y --name venv python=3.7
# Conda does not support -r spec or --file within a file
cp requirements/prod.txt requirements/prod.txt.noflag
sed -i '/^-r/d' requirements/prod.txt.noflag
conda install -y --name venv --file requirements/_base.txt --file requirements/prod.txt.noflag
sed --in-place \
's/CFUNCTYPE(c_int)(lambda: None)/# CCCFUNCTYPE(c_int)(lambda: None)/g' \
/opt/conda/envs/venv/lib/python3.7/ctypes/__init__.py
rm -f requirements/prod.txt.noflag
conda clean --tarballs
conda pack --name venv --output "$BUILD_DIR/runtime.tar.gz"
# Junk within pyapp that might be present if not building in CI
if [ -z "$CI_PROJECT_DIR" ]; then
    find "$DIR" -name '*.pyc' -type f -delete
    find "$DIR" -name '.DS_Store' -type f -delete
    find "$DIR" -name '__pycache__' -type d -delete
fi
# Copy this project's stuff into build dir
cp -v -R "$DIR/config" "$DIR/pyapp" "$BUILD_DIR"
cp -v "$DIR"/rda/* "$BUILD_DIR"
cp -v setup.{py,cfg} pyproject.toml "$BUILD_DIR"
cp -v "$DIR"/scripts/{start-server.sh,wsgi.py} "$BUILD_DIR"
cp -v /opt/conda/envs/venv/lib/python3.7/ctypes/__init__.py "$BUILD_DIR"
# Try to extract the version and appKey if we have jq
if [ -x "$(command -v jq)" ]; then
    VERSION="-$(jq -j '.version' rda/rda.manifest)"
    appKey="$(jq -j .appKey rda/rda.manifest)"
else
    VERSION=''
    appKey="$(grep --color=never -oP 'appKey":\s+"\K[^"]+' rda/rda.manifest)"
fi
if [ -z "$appKey" ]; then
    appKey="my.webapp.ng-py"
fi
# Bundle into RDA ZIP
mkdir -pv "$DIST_DIR"
pushd "$BUILD_DIR"
zip -q -9 -r "$DIST_DIR/${appKey}${VERSION}.rda.zip" *
popd
rm -rf "$BUILD_DIR"
ls -1 -l -F "$DIST_DIR"/*.zip
conda clean -afy
I wasn't able to get the monkey patching to work, but I did figure out that ctypes is not part of numpy; rather, it is part of Python's standard library, so conda pack may well treat standard-library files a bit differently.
So I gave up on monkey patching and found out that upgrading my Python version fixed the underlying issue.
Thanks 🙏

How to activate an Anaconda environment in a Singularity recipe

I am trying to create a Singularity image and recipe that creates an Anaconda environment and then activates it, so that I can build the Python wheel of a project inside that environment and have everything 100% installed and functional once the Singularity build completes.
Bootstrap: docker
From: nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
%environment
# use bash as default shell
SHELL=/bin/bash
# add CUDA paths
CPATH="/usr/local/cuda/include:$CPATH"
PATH="/usr/local/cuda/bin:$PATH"
LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
CUDA_HOME="/usr/local/cuda"
# add Anaconda path
PATH="/usr/local/anaconda3/bin:$PATH"
export PATH LD_LIBRARY_PATH CPATH CUDA_HOME
export MKL_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
%setup
# runs on host
# the path to the image is $SINGULARITY_ROOTFS
%post
# post-setup script
# load environment variables
. /environment
# use bash as default shell
echo "\n #Using bash as default shell \n" >> /environment
echo 'SHELL=/bin/bash' >> /environment
# make environment file executable
chmod +x /environment
# default mount paths
mkdir /scratch /data
#Add CUDA paths
echo "\n #Cuda paths \n" >> /environment
echo 'export CPATH="/usr/local/cuda/include:$CPATH"' >> /environment
echo 'export PATH="/usr/local/cuda/bin:$PATH"' >> /environment
echo 'export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"' >> /environment
echo 'export CUDA_HOME="/usr/local/cuda"' >> /environment
# updating and getting required packages
apt-get update
apt-get install -y wget git vim build-essential cmake
# creates a build directory
mkdir build
cd build
# download and install Anaconda
CONDA_INSTALL_PATH="/usr/local/anaconda3"
wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh
chmod +x Anaconda3-5.0.1-Linux-x86_64.sh
./Anaconda3-5.0.1-Linux-x86_64.sh -b -p $CONDA_INSTALL_PATH
# download and install CaImAn
git clone https://github.com/flatironinstitute/CaImAn.git
cd CaImAn
conda env create -n caiman -f environment.yml
source activate caiman
pip install .
caimanmanager.py install
source deactivate
%runscript
# executes with the singularity run command
# delete this section to use existing docker ENTRYPOINT command
%test
# test that script is a success
I've tried both conda activate and source activate and get the same error for both.
+ source activate caiman
/bin/sh: 41: source: not found
ABORT: Aborting with RETVAL=255
Cleaning up...
Is this just something I have to do afterwards by making the image writable?
That would be the next default solution, but it would be nice if the recipe could just work.
*Edit 1
. activate caiman returns:
+ . activate caiman
+ [[ -n ]]
/bin/sh: 4: /usr/local/anaconda3/bin/activate: [[: not found
+ [[ -n ]]
/bin/sh: 7: /usr/local/anaconda3/bin/activate: [[: not found
+ echo Only bash and zsh are supported
Only bash and zsh are supported
+ return 1
ABORT: Aborting with RETVAL=255
Cleaning up...
*Edit 2
By using a newer version of Anaconda, the 'not found' error goes away. All I did was change the Anaconda distribution fetched with wget, and I also forced an update just to be doubly sure.
# download and install Anaconda
CONDA_INSTALL_PATH="/usr/local/anaconda3"
wget https://repo.continuum.io/archive/Anaconda3-5.3.1-Linux-x86_64.sh
chmod +x Anaconda3-5.3.1-Linux-x86_64.sh
./Anaconda3-5.3.1-Linux-x86_64.sh -b -p $CONDA_INSTALL_PATH
conda update -n base -c defaults conda
pip install --upgrade pip
If I am not wrong (which is entirely possible), the same happens with virtualenv.
The problem is that source is a bash builtin, not a POSIX sh command; try:
. activate caiman
instead of
source activate caiman
Editing after the updated question: check https://github.com/conda/conda/issues/6639; you might want to investigate what your activate script is doing (it seems to be looking for files that do not exist).
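For reference, a sketch of a /bin/sh-compatible activation for %post, assuming a conda version recent enough to ship etc/profile.d/conda.sh (the 5.3.1 installer from Edit 2 should qualify):
# Dot-source conda's shell hook (POSIX-safe), then activate the environment.
. /usr/local/anaconda3/etc/profile.d/conda.sh
conda activate caiman
pip install .
caimanmanager.py install
conda deactivate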

Automate Python application installation

I have a Java application that should interact with Python agents installed on remote machines. My main application has SSH access to these remote machines. To make users' lives simpler, I want to automate the agent installation process, so the user can just push a button and the main application connects, installs and runs the agent remotely.
I expect that:
remote machines will run various flavors of UNIX for now (Windows later);
SSH user may not be root;
Python is installed on the remote machine;
Remote machine may not have an Internet connection.
My Python agents use virtualenv and pip, and have a number of dependencies. I've made a script to download virtualenv and the dependencies as tar.gz / zip archives:
#!/usr/bin/env bash

# Should be the same in dist.sh and install_dist.sh.
VIRTUALENV_VERSION=13.0.3
VIRTUALENV_BASE_URL=https://pypi.python.org/packages/source/v/virtualenv

echo "Recreating dist directory..."
if [ -d "dist" ]; then
    rm -rf dist
fi
mkdir -p dist

echo "Copying agent sources..."
cp -r src dist
cp requirements.txt dist
cp agent.sh dist
cp install_dist.sh dist

echo "Downloading virtualenv..."
curl -o dist/virtualenv-$VIRTUALENV_VERSION.tar.gz -O $VIRTUALENV_BASE_URL/virtualenv-$VIRTUALENV_VERSION.tar.gz

echo "Downloading requirements..."
mkdir dist/libs
./env/bin/pip install --download="dist/libs" --no-binary :all: -r requirements.txt

echo "Packing dist directory..."
tar cvzf dist.tar.gz dist
When installation starts, my main app scps the archive to the remote machine, installs virtualenv and the requirements, and copies all the required scripts:
#!/usr/bin/env bash

# Runs remotely.
SCRIPT_DIR="`dirname \"$0\"`"
SCRIPT_DIR="`( cd \"$SCRIPT_DIR\" && pwd )`"
HOME_DIR="`( cd \"$SCRIPT_DIR/..\" && pwd )`"
VIRTUALENV_VERSION=13.0.3

echo "Installing virtualenv..."
tar xzf $SCRIPT_DIR/virtualenv-$VIRTUALENV_VERSION.tar.gz --directory $SCRIPT_DIR
python $SCRIPT_DIR/virtualenv-$VIRTUALENV_VERSION/virtualenv.py $HOME_DIR/env
if [ $? -ne 0 ]
then
    echo "Unable to create virtualenv."
    exit 1
fi
$HOME_DIR/env/bin/pip install $SCRIPT_DIR/virtualenv-$VIRTUALENV_VERSION.tar.gz
if [ $? -ne 0 ]
then
    echo "Unable to install virtualenv."
    exit 1
fi

echo "Installing requirements..."
$HOME_DIR/env/bin/pip install --no-index --find-links="$SCRIPT_DIR/libs" -r $SCRIPT_DIR/requirements.txt
if [ $? -ne 0 ]
then
    echo "Unable to install requirements."
    exit 1
fi

echo "Copying agent sources..."
cp -r $SCRIPT_DIR/src $HOME_DIR
if [ $? -ne 0 ]
then
    echo "Unable to copy agent sources."
    exit 1
fi
cp -r $SCRIPT_DIR/agent.sh $HOME_DIR
if [ $? -ne 0 ]
then
    echo "Unable to copy agent.sh."
    exit 1
fi
cp -r $SCRIPT_DIR/requirements.txt $HOME_DIR

echo "Cleaning installation files..."
rm -rf $HOME_DIR/dist.tar.gz
rm -rf $SCRIPT_DIR
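For context, the end-to-end flow the main application performs over SSH might look like this (host name and paths are placeholders):
./dist.sh                                  # build dist.tar.gz locally
scp dist.tar.gz user@remote:/tmp/          # upload via the existing SSH access
ssh user@remote 'cd /tmp && tar xzf dist.tar.gz && sh dist/install_dist.sh'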
I faced a problem with building some of the dependencies remotely: for example, they may require gcc or other libraries that would have to be installed manually with sudo. Maybe I should use pre-compiled wheels where possible, providing them for each target system? Or maybe you see a better way to implement this task?
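If you go the pre-compiled route, a sketch of what that could look like (the per-platform wheel directory layout is my own convention, not an established one):
# On a build machine matching each target (same architecture and libc):
./env/bin/pip wheel -r requirements.txt -w dist/wheels/linux_x86_64
# On the target, install offline from the matching directory:
$HOME_DIR/env/bin/pip install --no-index --find-links="$SCRIPT_DIR/wheels/linux_x86_64" -r $SCRIPT_DIR/requirements.txt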
Maybe deliver fully packaged applications? Have a look at Freeze and py2exe.
Also, if you plan on delivering to a Windows environment, note that compiling there requires a lot of work and is quite annoying; prefer shipping libraries pre-compiled with your app. For a Linux environment, it mostly depends on the target, so you will probably have to stick with building dependencies.
Note: From your comment:
providing distribution for each target system requires building them
on each target system
Note that you can cross-compile, which lets you build for a different system (even Windows) without needing a running machine with that environment.
I would go ahead with cross-compiling. The worst-case scenario is that a very small number of hosts will be troublesome and unable to use your binaries; in that case, just solve the problem case by case.
Best of luck :)

Setting virtualenv to use a compiled from source python as bin

I need to force a virtualenv to use a Python compiled from source on my CI server (long story short: Travis CI supports Python 2.7.3, Heroku runs 2.7.6, and we insist on testing in the same environment as production). But I am failing to get virtualenv to run against it.
travis first runs this script:
if [ ! -d ./compiled ]; then
    echo "creating compiled folder"
    mkdir compiled
else
    echo "compiled exists"
fi

cd compiled

if [ ! -e Python-2.7.6.tar.xz ]; then
    echo "Downloading python and compiling"
    wget http://www.python.org/ftp/python/2.7.6/Python-2.7.6.tar.xz
    tar xf Python-2.7.6.tar.xz
    cd Python-2.7.6
    ./configure
    make
    chmod +x ./python
else
    echo "Compiled python exists!"
fi
and then:
- virtualenv -p ./python ./compiled/python276
- source ./compiled/python276/bin/activate
but running python --version then shows 2.7.3 instead of 2.7.6.
I guess I'm missing something. Thanks for the help!
Go to the virtualenv folder and open its bin/ directory:
~/.Virtualenv/my_project/bin
Remove the 'python' file and create a symbolic link to the python executable you want to use, like:
cd ~/.Virtualenv/my_project/bin
mv python python-bkp
ln -s /usr/bin/python .
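Alternatively (an untested guess at the original failure), note that the -p argument in the Travis step above is resolved relative to the current working directory; pointing virtualenv at the compiled interpreter by absolute path may be all that is needed:
virtualenv -p $PWD/compiled/Python-2.7.6/python ./compiled/python276
. ./compiled/python276/bin/activate
python --version   # should now report 2.7.6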
