We use the python:3.9 image as the base and run some commands on top of it.
Base Image
########################
# Base Image Section #
########################
#
# Creates an image with the common requirements for a flask app pre-installed
# Start with a smol OS
FROM python:3.9
# Install basic requirements
RUN apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends \
apt-transport-https \
ca-certificates && \
apt-get autoremove -yq && apt-get clean && rm -rf "/var/lib/apt/lists"/*
# Install CA certs
# Prefer the mirror for package downloads
COPY ["ca_certs/*.crt", "/usr/local/share/ca-certificates/"]
RUN update-ca-certificates && \
mv /etc/apt/sources.list /etc/apt/sources.list.old && \
printf 'deb https://mirror.company.com/debian/ buster main contrib non-free\n' > /etc/apt/sources.list && \
cat /etc/apt/sources.list.old >> /etc/apt/sources.list
# Equivalent to `cd /app`
WORKDIR /app
# Fixes a host of encoding-related bugs
ENV LC_ALL=C.UTF-8
# Tells `apt` and others that no one is sitting at the keyboard
ENV DEBIAN_FRONTEND=noninteractive
# Set a more helpful shell prompt
ENV PS1='[\u#\h \W]\$ '
#####################
# ONBUILD Section #
#####################
#
# ONBUILD commands take effect when another image is built using this one as a base.
# Ref: https://docs.docker.com/engine/reference/builder/#onbuild
#
#
# And that's it! The base container should have all your dependencies and ssl certs pre-installed,
# and will copy your code over when used as a base with the "FROM" directive.
ONBUILD ARG BUILD_VERSION
ONBUILD ARG BUILD_DATE
# Copy our files into the container
ONBUILD ADD . .
# pre_deps: packages that need to be installed before code installation and remain in the final image
ONBUILD ARG pre_deps
# build_deps: packages that need to be installed before code installation, then uninstalled after
ONBUILD ARG build_deps
# COMPILE_DEPS: common packages needed for building/installing Python packages. Most users won't need to adjust this,
# but you could specify a shorter list if you didn't need all of these.
ONBUILD ARG COMPILE_DEPS="build-essential python3-dev libffi-dev libssl-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev g++ unixodbc-dev"
# ssh_key: If provided, writes the given string to ~/.ssh/id_rsa just before Python package installation,
# and deletes it before the layer is written.
ONBUILD ARG ssh_key
# If our python package is installable, install system packages that are needed by some python libraries to compile
# successfully, then install our python package. Finally, delete the temporary system packages.
ONBUILD RUN \
if [ -f setup.py ] || [ -f requirements.txt ]; then \
install_deps="$pre_deps $build_deps $COMPILE_DEPS" && \
uninstall_deps=$(python3 -c 'all_deps=set("'"$install_deps"'".split()); to_keep=set("'"$pre_deps"'".split()); print(" ".join(sorted(all_deps-to_keep)), end="")') && \
apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends $install_deps && \
if [ -n "${ssh_key}" ]; then \
mkdir -p ~/.ssh && chmod 700 ~/.ssh && printf "%s\n" "${ssh_key}" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa && \
printf "%s\n" "StrictHostKeyChecking=no" > ~/.ssh/config && chmod 600 ~/.ssh/config || exit 1 ; \
fi ; \
if [ -f requirements.txt ]; then \
pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; \
elif [ -f setup.py ]; then \
pip3 install --no-cache-dir --compile --editable . || exit 1 ; \
fi ; \
if [ -n "${ssh_key}" ]; then \
rm -rf ~/.ssh || exit 1 ; \
fi ; \
fi
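The set arithmetic buried in that RUN step's shell quoting is easier to see on its own. A standalone sketch of how uninstall_deps is derived, using hypothetical package names in place of the build args:

```python
# Hypothetical values standing in for the pre_deps / build_deps / COMPILE_DEPS build args
pre_deps = "libpq5"
build_deps = "git"
compile_deps = "build-essential python3-dev"

# Everything installed for the build...
install_deps = f"{pre_deps} {build_deps} {compile_deps}"

# ...minus the packages that should remain in the final image
uninstall_deps = " ".join(sorted(set(install_deps.split()) - set(pre_deps.split())))
print(uninstall_deps)  # build-essential git python3-dev
```

The inline `python3 -c` in the Dockerfile does exactly this, splicing the shell variables into the Python source via quoting.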
We built this image last year and it was working fine, but we recently decided to pick up the latest changes and build a new base image. Once we built it, it started failing on the last RUN command:
DEBU[0280] Deleting in layer: map[]
INFO[0281] Cmd: /bin/sh
INFO[0281] Args: [-c if [ -f setup.py ] || [ -f requirements.txt ]; then install_deps="$pre_deps $build_deps $COMPILE_DEPS" && uninstall_deps=$(python3 -c 'all_deps=set("'"$install_deps"'".split()); to_keep=set("'"$pre_deps"'".split()); print(" ".join(sorted(all_deps-to_keep)), end="")') && apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends $install_deps && if [ -n "${ssh_key}" ]; then mkdir -p ~/.ssh && chmod 700 ~/.ssh && printf "%s\n" "${ssh_key}" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa && printf "%s\n" "StrictHostKeyChecking=no" > ~/.ssh/config && chmod 600 ~/.ssh/config || exit 1 ; fi ; if [ -f requirements.txt ]; then pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; elif [ -f setup.py ]; then pip3 install --no-cache-dir --compile --editable . || exit 1 ; fi ; if [ -n "${ssh_key}" ]; then rm -rf ~/.ssh || exit 1 ; fi ; fi]
INFO[0281] Running: [/bin/sh -c if [ -f setup.py ] || [ -f requirements.txt ]; then install_deps="$pre_deps $build_deps $COMPILE_DEPS" && uninstall_deps=$(python3 -c 'all_deps=set("'"$install_deps"'".split()); to_keep=set("'"$pre_deps"'".split()); print(" ".join(sorted(all_deps-to_keep)), end="")') && apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends $install_deps && if [ -n "${ssh_key}" ]; then mkdir -p ~/.ssh && chmod 700 ~/.ssh && printf "%s\n" "${ssh_key}" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa && printf "%s\n" "StrictHostKeyChecking=no" > ~/.ssh/config && chmod 600 ~/.ssh/config || exit 1 ; fi ; if [ -f requirements.txt ]; then pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; elif [ -f setup.py ]; then pip3 install --no-cache-dir --compile --editable . || exit 1 ; fi ; if [ -n "${ssh_key}" ]; then rm -rf ~/.ssh || exit 1 ; fi ; fi]
error building image: error building stage: failed to execute command: starting command: fork/exec /bin/sh: exec format error
We label each image with its build date so we know when it last worked; the base image built on 12-09-22 works fine. Something new in python:3.9 must be causing this, since the same script was working before.
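An `exec format error` usually means the binary's architecture doesn't match the build host. Since `python:3.9` is a multi-arch tag whose digest moves over time, one way to rule that out (an assumption based on the error, not a confirmed diagnosis) is to pin the platform, or the exact known-good digest, in the base Dockerfile:

```dockerfile
# Force the amd64 variant instead of whatever the manifest resolves to
FROM --platform=linux/amd64 python:3.9

# Or pin the last known-good image by digest (placeholder shown, not a real digest):
# FROM python:3.9@sha256:<known-good-digest>
```

If the digest-pinned image builds but the floating tag doesn't, the new manifest resolution is the culprit.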
Related
I'm using a Dockerfile that takes tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim-2021-06-09 as its base image and installs the required Linux packages, including r-recommended and r-base. The Dockerfile below used to work fine, but when I tried updating to tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim-2023-01-09 as the base image, I was unable to install r-recommended and r-base at version 4.1.2-1~bustercran.0.
Dockerfile :
# Download IspR from IspR project pipeline and extract the folder and rename it as r-scripts. Place the r-scripts directory in backend root directory.
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim-2023-01-09
COPY key_gnupg.gpg /app/key_gnupg.gpg
COPY packages.txt /app/packages.txt
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until
RUN apt-get update && apt-get install -y gnupg2=2.2.27-2+deb11u2 && \
echo "deb http://cloud.r-project.org/bin/linux/debian buster-cran40/" >> /etc/apt/sources.list.d/cran.list && \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B8F25A8A73EACF41 && \
apt-get update
RUN apt-get install --no-install-recommends -y $(cat /app/packages.txt) && \
rm -rf /var/lib/apt/lists/* && \
apt-get purge --auto-remove && \
apt-get clean
COPY . /app
WORKDIR /app/r-scripts
RUN R -e "install.packages('remotes')"
RUN renv_version=`cat renv.lock | grep -A3 "renv" | grep -e "Version" | cut -d ':' -f2 | sed "s/\"//g" | sed "s/\,//g"|awk '{$1=$1};1'` && \
R -e "remotes::install_github('rstudio/renv#${renv_version}')" && \
rm -rf /app
CMD ["/bin/bash"]
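The renv version extraction in that pipeline can be tried outside the build against a stub renv.lock (the file contents below are hypothetical, shaped like renv's real lockfile):

```shell
#!/bin/bash
# Stub of the renv.lock fragment the pipeline greps (hypothetical content)
cat > renv.lock <<'EOF'
    "renv": {
      "Package": "renv",
      "Version": "0.15.4",
EOF

# Same pipeline as the Dockerfile: grab the Version line near "renv",
# strip quotes and commas, then let awk trim surrounding whitespace
renv_version=$(cat renv.lock | grep -A3 "renv" | grep -e "Version" | cut -d ':' -f2 | sed "s/\"//g" | sed "s/\,//g" | awk '{$1=$1};1')
echo "$renv_version"  # 0.15.4
rm -f renv.lock
```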
packages.txt
git=1:2.20.1-2+deb10u3
pkg-config=0.29-6
liblapack-dev=3.8.0-2
gfortran=4:8.3.0-1
libxml2=2.9.4+dfsg1-7+deb10u3
libxml2-dev=2.9.4+dfsg1-7+deb10u3
libssl-dev=1.1.1n-0+deb10u1
libcurl4-openssl-dev=7.64.0-4+deb10u2
libnlopt-dev=2.4.2+dfsg-8+b1
libpcre2-8-0=10.32-5
build-essential=12.6
r-recommended=4.1.2-1~bustercran.0
r-base=4.1.2-1~bustercran.0
curl=7.64.0-4+deb10u2
postgresql=11+200+deb10u4
libpq-dev=11.14-0+deb10u1
libblas-dev=3.8.0-2
libgomp1=8.3.0-6
zlib1g-dev=1:1.2.11.dfsg-1
zlib1g=1:1.2.11.dfsg-1
Error message:
E: Version '4.1.2-1~bustercran.0' for 'r-base-core' was not found
How to install the 4.1.2 version of r-base using this Dockerfile?
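One likely cause (an assumption inferred from the tag names, not verified): the 2023-01-09 base image is built on Debian bullseye, while the pinned package versions and the `buster-cran40` repository target buster, so those exact versions no longer resolve. A sketch of how to check the release and point at the matching CRAN repository instead:

```dockerfile
# Print the Debian release the base image actually ships
RUN cat /etc/os-release

# If it reports bullseye, use the bullseye CRAN repo rather than buster-cran40
RUN echo "deb http://cloud.r-project.org/bin/linux/debian bullseye-cran40/" \
      >> /etc/apt/sources.list.d/cran.list && \
    apt-get update && \
    apt-get install -y --no-install-recommends r-base r-recommended
```

The bullseye repo carries its own 4.x builds, so the version pins in packages.txt would need updating to whatever `apt-cache policy r-base` reports there.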
I need to create a Dockerfile that emulates a normal workspace.
We have a virtual machine where we train models. We use R and Python 3.
I want to automate some of the processes without changing the codebase, e.g. ~ must point to /home/<some user>.
The biggest problem is Anaconda3 in Docker, because every RUN is a standalone login.
Basis for my answer: https://github.com/xychelsea/anaconda3-docker/blob/main/Dockerfile
I've created my own mini R package installer:
install_r_packages.sh
#!/bin/bash
input="r-requirements.txt"
Rscript -e "install.packages('remotes')"
IFS='='
while IFS= read -r line; do
read -r package version <<<$line
package=$(echo "$package" | sed 's/ *$//g')
version=$(echo "$version" | sed 's/ *$//g')
if ! [[ ($package =~ ^#.*) || (-z $package) ]]; then
Rscript -e "remotes::install_version('$package', version = '$version')"
fi
done <$input
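The parsing can be exercised without R by swapping the Rscript call for plain echo. A minimal, self-contained demo with hypothetical sample packages:

```shell
#!/bin/bash
# Build a throwaway requirements file
input=$(mktemp)
printf '%s\n' '# packages for rmarkdown' 'htmltools=0.5.2' 'jsonlite=1.7.2' > "$input"

planned=""
IFS='='
while IFS= read -r line; do
    # Split "name=version" on '=' (global IFS is '=' for this inner read)
    read -r package version <<<"$line"
    package=$(echo "$package" | sed 's/ *$//g')
    version=$(echo "$version" | sed 's/ *$//g')
    # Skip comment and blank lines, same test as the installer script
    if ! [[ ($package =~ ^#.*) || (-z $package) ]]; then
        planned="$planned$package=$version "
    fi
done <"$input"
rm -f "$input"

echo "$planned"  # htmltools=0.5.2 jsonlite=1.7.2
```

The comment line never splits on '=', so the whole line lands in `$package` and the `^#` test filters it out.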
r-requirements.txt
# packages for rmarkdown
htmltools=0.5.2
jsonlite=1.7.2
...
rmarkdown=2.11
# more packages
...
Dockerfile
FROM debian:bullseye
RUN apt-get update
# install R
RUN apt-get install -y r-base r-base-dev libatlas3-base r-recommended libssl-dev openssl \
libcurl4-openssl-dev libfontconfig1-dev libxml2-dev xml2 pandoc lua5.3 clang
ENV ARROW_S3=ON \
LIBARROW_MINIMAL=false \
LIBARROW_BINARY=true \
RSTUDIO_PANDOC=/usr/lib/rstudio-server/bin/pandoc \
TZ=Etc/UTC
COPY r-requirements.txt .
COPY scripts/install_r_packages.sh scripts/install_r_packages.sh
RUN bash scripts/install_r_packages.sh
# create user
ENV REPORT_USER="reporter"
ENV PROJECT_HOME=/home/${REPORT_USER}/<project>
RUN useradd -ms /bin/bash ${REPORT_USER} \
&& mkdir /data \
&& mkdir /opt/mlflow \
&& chown -R ${REPORT_USER}:${REPORT_USER} /data \
&& chown -R ${REPORT_USER}:${REPORT_USER} /opt/mlflow
# copy project files
WORKDIR ${PROJECT_HOME}
COPY src src
... bla bla bla ...
COPY requirements.txt .
RUN chown -R ${REPORT_USER}:${REPORT_USER} ${PROJECT_HOME}
# Install python Anaconda env
ENV ANACONDA_PATH="/opt/anaconda3"
ENV PATH=${ANACONDA_PATH}/bin:${PATH}
ENV ANACONDA_INSTALLER=Anaconda3-2021.11-Linux-x86_64.sh
RUN mkdir ${ANACONDA_PATH} \
&& chown -R ${REPORT_USER}:${REPORT_USER} ${ANACONDA_PATH}
RUN apt-get install -y wget
USER ${REPORT_USER}
RUN wget https://repo.anaconda.com/archive/${ANACONDA_INSTALLER} \
&& /bin/bash ${ANACONDA_INSTALLER} -b -u -p ${ANACONDA_PATH} \
&& chown -R ${REPORT_USER} ${ANACONDA_PATH} \
&& rm -rvf ${ANACONDA_INSTALLER} \
&& echo ". ${ANACONDA_PATH}/etc/profile.d/conda.sh" >> ~/.bashrc \
&& echo "conda activate base" >> ~/.bashrc
RUN pip3 install --upgrade pip \
&& pip3 install -r requirements.txt \
&& pip3 install awscli
# run training and report
ENV PYTHONPATH=/home/${REPORT_USER}/<project> \
MLFLOW_TRACKING_URI=... \
MLFLOW_EXPERIMENT_NAME=...
CMD dvc config core.no_scm true \
&& dvc repro
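Regarding "every RUN is a standalone login": one common workaround (a sketch, assuming conda lives at /opt/anaconda3 as above) is to switch Docker's build shell to a bash login shell, so the conda initialization written to ~/.bashrc is sourced for each subsequent RUN:

```dockerfile
# Make every subsequent RUN use a bash login shell
SHELL ["/bin/bash", "--login", "-c"]

# conda activate now works inside RUN steps
RUN conda activate base && python --version
```

This keeps the codebase untouched; only the Dockerfile changes.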
I'm trying to run a cron job inside a Docker container and to see its logs (created with Python logging) either via docker logs my_container or in /var/log/cron.log. Neither is working. I tried a bunch of solutions I found on Stack Overflow.
This is my Dockerfile:
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Minsk
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update && apt-get install -y \
python3-dev \
python3-tk \
python3-pip \
libglib2.0-0\
libsm6 \
postgresql-server-dev-all \
postgresql-common \
openssh-client \
libxext6 \
nano \
pkg-config \
rsync \
cron \
&& \
apt-get clean && \
apt-get autoremove && \
rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade setuptools
RUN pip3 install numpy
ADD requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt && rm /requirements.txt
RUN touch /var/log/cron.log
COPY crontab /etc/cron.d/cjob
RUN chmod 0644 /etc/cron.d/cjob
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENV PYTHONUNBUFFERED 1
ADD . /code
WORKDIR /code
COPY ssh_config /etc/ssh/ssh_config
CMD cron -f
and this is how I run it:
nvidia-docker run -d \
-e DISPLAY=unix$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /media/storage:/opt/images/ \
-v /home/user/.aws/:/root/.aws/ \
--net host \
my_container
I tried different things such as:
Docker ubuntu cron tail logs not visible
See cron output via docker logs, without using an extra file
But I don't get any logs.
Change your chmod mode to 755 if you're trying to execute something from there. You might also want to add the -R flag while you're at it.
Next, add the following to your Dockerfile before the chmod layer.
# Symlink the cron to stdout
RUN ln -sf /dev/stdout /var/log/cron.log
And add this as your final layer
# Run the command on container startup
CMD cron && tail -F /var/log/cron.log 2>&1
Referenced this from the first link that you mentioned. This should work.
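For output to actually land in that file, each job in /etc/cron.d/cjob also has to redirect its own stdout/stderr there, and /etc/cron.d entries need a user field. A hypothetical entry (my_job.py is a placeholder for your script):

```
* * * * * root python3 /code/my_job.py >> /var/log/cron.log 2>&1
```

With the jobs writing to /var/log/cron.log and `tail -F` as PID 1's foreground command, the output shows up in `docker logs`.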
I'm trying to create a Docker image that has Python 3 and Node v4.4.7 so that I can use it as a container for my project, which needs both Python and that version of Node.
# Pull base image.
FROM python:3-onbuild
CMD [ "python", "./hello.py" ]
# Install Node.js
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
When I first tried it, the build complained about not having a Python script to run, so I added a basic Python file, hello.py, that just has this:
print "Hello, Python!"
Then it complained about not having a requirements.txt file, so I added an empty requirements.txt.
Now when I run docker build -t isaacweathersnet/sampledockerimage . it snafus during the Node install with
node-v4.4.0/benchmark/arrays/zero-int.js
File "./configure", line 446
'''
^
SyntaxError: Missing parentheses in call to 'print'
The command '/bin/sh -c cd /tmp && wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && tar xvzf node-v4.4.7.tar.gz && rm -f node-v4.4.7.tar.gz && cd node-v* && ./configure && CXX="g++ -Wno-unused-local-typedefs" make && CXX="g++ -Wno-unused-local-typedefs" make install && cd /tmp && rm -rf /tmp/node-v* && npm install -g npm && print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc' returned a non-zero code: 1
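The traceback comes from Node 4.x's ./configure, which is a Python 2 script: its bare print statements are a syntax error under the Python 3 that python:3-onbuild puts on PATH. A tiny reproduction of the failure mode (the snippet below is illustrative, not the actual configure source):

```python
# Compiling a Python 2 style print statement under Python 3 fails the same way
try:
    compile("print 'hello'", "<configure>", "exec")
    raised = False
except SyntaxError as err:
    raised = True
    print(type(err).__name__)  # SyntaxError
```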
Found the solution on GitHub in a repo that had both Python and Node. No luck with Python 3+, but it worked well with 2.7:
https://github.com/nsdont/python-node/blob/master/Dockerfile
FROM python:2.7
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
echo -e '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
There are nodejs-python and python-nodejs (which is built on top of nodejs-python). It's worth having a look at those.
python-nodejs provides Node 10.x, npm 6.x, stable yarn, and the latest Python, pip, and pipenv. The versions used should be adjustable to your needs. Use the Dockerfile as a basis and adjust the RUN section
RUN \
echo "deb https://deb.nodesource.com/node_10.x stretch main" > /etc/apt/sources.list.d/nodesource.list && \
wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list && \
wget -qO- https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
apt-get update && \
apt-get install -yqq nodejs yarn && \
pip install -U pip && pip install pipenv && \
npm i -g npm@^6 && \
rm -rf /var/lib/apt/lists/*
to the Node version you need. The yarn part (a dependency-management alternative to npm) can be removed if you don't need yarn.
When trying to install PyV8 on Ubuntu, I type the command:
python setup.py build
and it displays this error:
error: command 'c++' failed with exit status 1
Does anybody have a solution for this?
Here is what I have in my Dockerfile. The following is tested and runs in production on top of Debian Stretch. I recommend using exactly the PyV8 / V8 setup that I'm using - I spent at least a week figuring out which combination doesn't lead to memory leaks. I also recommend reading through the discussion and the JSContext fix here and here.
In short, support for PyV8 is almost non-existent - either you use it just as a toy, you follow exactly this recipe, or you spend a significant amount of time and effort forking the repo and making it better. If starting fresh, I recommend using Node.js instead and communicating with Python through some IPC method.
ENV MY_HOME /home/forge
ENV MY_LIB $MY_HOME/lib
# preparing dependencies for V8 and PyV8
ENV V8_HOME $MY_LIB/v8
RUN apt-get update && \
apt-get install -y libboost-thread-dev \
libboost-all-dev \
libboost-dev \
libboost-python-dev \
autoconf \
libtool \
systemtap \
scons
# compiling an older version of boost, required for this version of V8
RUN mkdir -p $MY_LIB/boost && cd $MY_LIB/boost && \
wget http://sourceforge.net/projects/boost/files/boost/1.54.0/boost_1_54_0.tar.gz && tar -xvzf boost_1_54_0.tar.gz && cd $MY_LIB/boost/boost_1_54_0 && \
./bootstrap.sh && \
./b2 install --prefix=/usr/local --with-python --with-thread && \
ldconfig && \
ldconfig /usr/local/lib
# preparing gcc 4.9 - anything newer will lead to errors with the V8 codebase
ENV CC "gcc-4.9"
ENV CPP "gcc-4.9 -E"
ENV CXX "g++-4.9"
ENV PATH_BEFORE_V8 "${MY_HOME}/bin:${PATH}"
ENV PATH "${MY_HOME}/bin:${PATH}"
RUN echo "deb http://ftp.us.debian.org/debian/ jessie main contrib non-free" >> /etc/apt/sources.list && \
echo "deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free" >> /etc/apt/sources.list && \
apt-get update && \
apt-get install -y gcc-4.9 g++-4.9 && \
mkdir -p ${MY_HOME}/bin && cd ${MY_HOME}/bin && \
ln -s /usr/bin/${CC} ${MY_HOME}/bin/gcc && \
ln -s /usr/bin/${CC} ${MY_HOME}/bin/x86_64-linux-gnu-gcc && \
ln -s /usr/bin/${CXX} ${MY_HOME}/bin/g++ && \
ln -s /usr/bin/${CXX} ${MY_HOME}/bin/x86_64-linux-gnu-g++
# compiling a specific version of V8 and PyV8, since older combos lead to memory leaks
RUN git clone https://github.com/muellermichel/V8_r10452.git $V8_HOME && \
git clone https://github.com/muellermichel/PyV8_r429.git $MY_LIB/pyv8 && \
cd $MY_LIB/pyv8 && python setup.py build && python setup.py install
# cleaning up
RUN PATH=${PATH_BEFORE_V8} && \
head -n -2 /etc/apt/sources.list > ${MY_HOME}/sources.list.temp && \
mv ${MY_HOME}/sources.list.temp /etc/apt/sources.list && \
apt-get update
ENV PATH "${PATH_BEFORE_V8}"
ENV CC ""
ENV CPP ""
ENV CXX ""
An older version, which depends on the now-defunct Google Code and was made for Ubuntu 12.04:
export MY_LIB_FOLDER=[PUT-YOUR-DESIRED-INSTALL-PATH-HERE]
apt-get install -y libboost-thread-dev
apt-get install -y libboost-all-dev
apt-get install -y libboost-dev
apt-get install -y libboost-python-dev
apt-get install -y git-core autoconf libtool systemtap
apt-get install -y subversion
apt-get install -y wget
mkdir -p $MY_LIB_FOLDER/boost && cd $MY_LIB_FOLDER/boost && wget http://sourceforge.net/projects/boost/files/boost/1.54.0/boost_1_54_0.tar.gz && tar -xvzf boost_1_54_0.tar.gz
cd $MY_LIB_FOLDER/boost/boost_1_54_0 && ./bootstrap.sh && ./b2 install --prefix=/usr/local --with-python --with-thread && ldconfig && ldconfig /usr/local/lib
svn checkout -r10452 http://v8.googlecode.com/svn/trunk/ $MY_LIB_FOLDER/v8
export V8_HOME=$MY_LIB_FOLDER/v8
svn checkout -r429 http://pyv8.googlecode.com/svn/trunk/ $MY_LIB_FOLDER/pyv8
git clone https://github.com/taguchimail/pyv8-linux-x64.git $MY_LIB_FOLDER/pyv8-taguchimail && cd $MY_LIB_FOLDER/pyv8-taguchimail && git checkout origin/stable
apt-get install -y scons
cd $MY_LIB_FOLDER/pyv8 && patch -p0 < $MY_LIB_FOLDER/pyv8-taguchimail/patches/pyv8.patch && python setup.py build && python setup.py install
Had the same problem and this worked for me:
export LIB=~
apt-get install -y curl libboost-thread-dev libboost-all-dev libboost-dev libboost-python-dev git-core autoconf libtool
svn checkout -r19632 http://v8.googlecode.com/svn/trunk/ $LIB/v8
export V8_HOME=$LIB/v8
svn checkout http://pyv8.googlecode.com/svn/trunk/ $LIB/pyv8 && cd $LIB/pyv8 && python setup.py build && python setup.py install
Solution found in comments here - https://code.google.com/p/pyv8/wiki/HowToBuild
I'm using a Debian-based distro. Here's how I installed PyV8 (you'll need to have git installed):
cd /usr/share
sudo git clone https://github.com/emmetio/pyv8-binaries.git
cd pyv8-binaries/
sudo unzip pyv8-linux64.zip
sudo cp -a PyV8.py _PyV8.so /usr/bin