Our Airflow deployment's Docker image used to take ~5 minutes to build, and all of a sudden it is taking over an hour. That said, we haven't had to rebuild the image in a few months, so I'm not sure when the issue first appeared...
It looks like https://stackoverflow.com/questions/65122957/resolving-new-pip-backtracking-runtime-issue is the culprit. We're seeing a lot of warnings that look like this during build:
=> => # Downloading google_cloud_os_login-2.3.1-py2.py3-none-any.whl (42 kB)
=> => # INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints
=> => # to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press
=> => # Ctrl + C.
=> => # Downloading google_cloud_os_login-2.2.1-py2.py3-none-any.whl (41 kB)
=> => # Downloading google_cloud_os_login-2.2.0-py2.py3-none-any.whl (44 kB)
Here is the line in our Dockerfile that is taking the hour+
RUN set -ex \
&& buildDeps=' \
freetds-dev \
libkrb5-dev \
libsasl2-dev \
libssl-dev \
libffi-dev \
libpq-dev \
git \
' \
&& apt-get update -yqq \
&& apt-get install -yqq --no-install-recommends \
$buildDeps \
freetds-bin \
build-essential \
apt-utils \
curl \
rsync \
netcat \
locales \
&& sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen \
&& locale-gen \
&& update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 \
&& useradd -ms /bin/bash -d ${AIRFLOW_USER_HOME} airflow \
&& pip install -U pip setuptools wheel \
&& pip install pytz \
&& pip install pyOpenSSL \
&& pip install ndg-httpsclient \
&& pip install pyasn1 \
&& pip install apache-airflow[crypto,postgres,slack,kubernetes,gcp,docker,ssh]==${AIRFLOW_VERSION} \
&& if [ -n "${PYTHON_DEPS}" ]; then pip install ${PYTHON_DEPS}; fi \
&& apt-get purge --auto-remove -yqq $buildDeps \
&& apt-get autoremove -yqq --purge \
&& apt-get clean \
&& rm -rf \
/tmp/* \
/var/tmp/* \
/usr/share/man \
/usr/share/doc \
/usr/share/doc-base \
/var/lib/apt/lists/*
...
...
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
and here is our requirements.txt
google-cloud-core==1.4.1
google-cloud-datastore==1.15.0
gcsfs==0.6.1
flatten-dict==0.4.2
bigquery_schema_generator==1.4
backoff==1.11.1
six==1.13.0
ndjson==0.3.1
pymongo==3.1.2
SQLAlchemy==1.3.15
pandas==1.3.1
numpy==1.21.1
billiard
I am actually quite confused that this warning message refers to google_cloud_os_login, because the build step that is hanging is the line I shared starting with RUN set -ex, which doesn't appear to install anything Google Cloud related. We do install some Google Cloud packages via requirements.txt (-core, -datastore), but the lines that COPY and pip install requirements.txt come much lower in our Dockerfile (as indicated by the ...). These warnings pop up for many libraries, but google_cloud_os_login does seem to be a major culprit, taking a significant amount of time.
Where in the RUN set -ex ... command is google_cloud_os_login being pulled in? And how can we pin a specific version of this library to speed up the build of this Docker image?
I think the various google packages you're seeing are dependencies of apache-airflow[gcp].
To speed up the install, the documentation recommends you use one of the constraint files they provide. They create tags named constraints-<version> that contain files you can pass to pip with --constraint.
For example, when trying to install 2.2.0, there is a constraints-2.2.0 tag. In this tag's file tree, you'll see files like constraints-3.8.txt, where 3.8 is the python version I'm using.
pip install apache-airflow[gcp]==2.2.0 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.8.txt"
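Adapted to the Dockerfile in the question, that could look roughly like this (a sketch; the ARG values are placeholders for whichever Airflow and base-image Python versions you actually build with):
ARG AIRFLOW_VERSION=2.2.0
ARG PYTHON_VERSION=3.8
# Pinning the entire dependency tree up front is what stops pip's backtracking
RUN pip install "apache-airflow[crypto,postgres,slack,kubernetes,gcp,docker,ssh]==${AIRFLOW_VERSION}" \
    --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"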
I'm using a python:3.7.4-slim-buster docker image and I can't change it.
I'm wondering how to use my NVIDIA GPUs with it.
I usually used a tensorflow/tensorflow:1.14.0-gpu-py3 image, and with a simple --runtime=nvidia in the docker run command everything worked fine, but now I have this constraint.
I think no shortcut exists for this type of image, so I was following this guide https://towardsdatascience.com/how-to-properly-use-the-gpu-within-a-docker-container-4c699c78c6d1, building the Dockerfile it proposes:
FROM python:3.7.4-slim-buster
RUN apt-get update && apt-get install -y build-essential
RUN apt-get --purge remove -y nvidia*
# Get the install files you used to install CUDA and the NVIDIA drivers on your host
ADD ./Downloads/nvidia_installers /tmp/nvidia
# Install the driver.
RUN /tmp/nvidia/NVIDIA-Linux-x86_64-331.62.run -s -N --no-kernel-module
# For some reason the driver installer left temp files when used during a docker build
# (I don't have any explanation why), and the CUDA installer will fail if they're still
# there, so we delete them.
RUN rm -rf /tmp/selfgz7
# CUDA driver installer.
RUN /tmp/nvidia/cuda-linux64-rel-6.0.37-18176142.run -noprompt
# CUDA samples; remove this line if you don't want them.
RUN /tmp/nvidia/cuda-samples-linux-6.0.37-18176142.run -noprompt -cudaprefix=/usr/local/cuda-6.0
# Add the CUDA library into your PATH
RUN export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
# Update the ld.so.conf.d directory
RUN touch /etc/ld.so.conf.d/cuda.conf
# Delete installer files.
RUN rm -rf /tmp/*
But it raises an error:
ADD failed: stat /var/lib/docker/tmp/docker-builder080208872/Downloads/nvidia_installers: no such file or directory
What can I change to easily let the docker image see my GPUs?
The TensorFlow image is split into several 'partial' Dockerfiles. One of them contains all the dependencies TensorFlow needs to operate on a GPU. Using it, you can easily create a custom image; you only need to change the default python to whatever version you need. This seems to me a much easier job than bringing NVIDIA's stuff into a Debian image (which, AFAIK, is not officially supported for CUDA and/or cuDNN).
Here's the Dockerfile:
# TensorFlow image base written by TensorFlow authors.
# Source: https://github.com/tensorflow/tensorflow/blob/v2.3.0/tensorflow/tools/dockerfiles/partials/ubuntu/nvidia.partial.Dockerfile
# -------------------------------------------------------------------------
ARG ARCH=
ARG CUDA=10.1
# UBUNTU_VERSION is used in the FROM line below; in the original partials it is
# supplied by another file, so declare it here (18.04 matches the TF 2.3 GPU images)
ARG UBUNTU_VERSION=18.04
FROM nvidia/cuda${ARCH:+-$ARCH}:${CUDA}-base-ubuntu${UBUNTU_VERSION} as base
# ARCH and CUDA are specified again because the FROM directive resets ARGs
# (but their default value is retained if set previously)
ARG ARCH
ARG CUDA
ARG CUDNN=7.6.4.38-1
ARG CUDNN_MAJOR_VERSION=7
ARG LIB_DIR_PREFIX=x86_64
ARG LIBNVINFER=6.0.1-1
ARG LIBNVINFER_MAJOR_VERSION=6
# Needed for string substitution
SHELL ["/bin/bash", "-c"]
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cuda-command-line-tools-${CUDA/./-} \
# There appears to be a regression in libcublas10=10.2.2.89-1 which
# prevents cublas from initializing in TF. See
# https://github.com/tensorflow/tensorflow/issues/9489#issuecomment-562394257
libcublas10=10.2.1.243-1 \
cuda-nvrtc-${CUDA/./-} \
cuda-cufft-${CUDA/./-} \
cuda-curand-${CUDA/./-} \
cuda-cusolver-${CUDA/./-} \
cuda-cusparse-${CUDA/./-} \
curl \
libcudnn7=${CUDNN}+cuda${CUDA} \
libfreetype6-dev \
libhdf5-serial-dev \
libzmq3-dev \
pkg-config \
software-properties-common \
unzip
# Install TensorRT if not building for PowerPC
RUN [[ "${ARCH}" = "ppc64le" ]] || { apt-get update && \
apt-get install -y --no-install-recommends libnvinfer${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
libnvinfer-plugin${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; }
# For CUDA profiling, TensorFlow requires CUPTI.
ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# Link the libcuda stub to the location where tensorflow is searching for it and reconfigure
# dynamic linker run-time bindings
RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1 \
&& echo "/usr/local/cuda/lib64/stubs" > /etc/ld.so.conf.d/z-cuda-stubs.conf \
&& ldconfig
# -------------------------------------------------------------------------
#
# Custom part
FROM base
ARG PYTHON_VERSION=3.7
RUN apt-get update && apt-get install -y --no-install-recommends --no-install-suggests \
python${PYTHON_VERSION} \
python3-pip \
python${PYTHON_VERSION}-dev \
# Change default python
&& cd /usr/bin \
&& ln -sf python${PYTHON_VERSION} python3 \
&& ln -sf python${PYTHON_VERSION}m python3m \
&& ln -sf python${PYTHON_VERSION}-config python3-config \
&& ln -sf python${PYTHON_VERSION}m-config python3m-config \
&& ln -sf python3 /usr/bin/python \
# Update pip and add common packages
&& python -m pip install --upgrade pip \
&& python -m pip install --upgrade \
setuptools \
wheel \
six \
# Cleanup
&& apt-get clean \
&& rm -rf $HOME/.cache/pip
You can take it from here: change the Python version to the one you need (and which is available in the Ubuntu repositories), add packages, code, etc.
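For example (a sketch: the image tag is made up, and this assumes Docker 19.03+ with the NVIDIA container toolkit installed on the host):
docker build --build-arg PYTHON_VERSION=3.7 -t my-tf-gpu .
docker run --rm --gpus all my-tf-gpu nvidia-smi   # should list the host's GPUs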
I am trying to create an image to run a FastAPI app in Docker, and the software that I am using in the API requires Ubuntu 16.04. While I am trying to install Python packages, I am getting the following error (while other packages are installed correctly):
No matching distribution found for fastapi
Here is my Docker file code:
FROM ubuntu:16.04
LABEL maintainer="sai"
COPY ./app /api/api
COPY requirements.txt ./requirements.txt
RUN apt-get update \
&& apt install python3-pip -y \
&& pip3 install --upgrade pip==20.0.1 \
&& pip install -r requirements.txt
RUN apt-get update && \
apt-get install -y --no-install-recommends \
g++ \
make \
automake \
autoconf \
bzip2 \
unzip \
wget \
sox \
libtool \
git \
subversion \
python2.7 \
python3 \
zlib1g-dev \
gfortran \
ca-certificates \
patch \
ffmpeg \
vim && \
rm -rf /var/lib/apt/lists/*
RUN ln -s /usr/bin/python2.7 /usr/bin/python
#other toolkit installation commands
ENV PYTHONPATH=/api
WORKDIR /api
EXPOSE 8000
ENTRYPOINT ["uvicorn"]
CMD ["api.main:app", "--host", "0.0.0.0"]
I am new to Docker, so excuse my mistakes. I have a working API and I need to dockerize it.
The API also involves creating and deleting files and folders; is this OK with Docker?
Note
I also tried upgrading pip to the latest version, but that didn't work.
Any pointers to further helpful resources on dockerizing the API are most welcome.
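(For anyone hitting the same error, a likely cause worth checking: Ubuntu 16.04's python3 is 3.5, and fastapi has never published releases for Python 3.5, so pip finds no matching distribution. A quick, hypothetical check that assumes nothing about the rest of the setup:)
# confirm which interpreter pip will use inside ubuntu:16.04
docker run --rm ubuntu:16.04 bash -c \
    "apt-get update -qq && apt-get install -qq -y python3 > /dev/null && python3 --version"
# Python 3.5.2  <- fastapi needs Python >= 3.6, so pip has nothing it can install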
I am trying to build Python and Pip from source. Following is the Dockerfile for it:
FROM buildpack-deps:stretch
ARG PYTHON_VERSION="3.6.5"
ARG PIP_VERSION="20.2"
ARG INSTALLATION_PREFIX="/opt/python/${PYTHON_VERSION}"
WORKDIR /usr/src/python
RUN wget -q https://www.python.org/ftp/python/$PYTHON_VERSION/Python-${PYTHON_VERSION}.tgz \
&& apt-get update \
&& apt-get install -y \
make \
build-essential \
libssl-dev \
zlib1g-dev \
libbz2-dev \
libreadline-dev \
libsqlite3-dev \
wget \
curl \
llvm \
libncurses5-dev \
libncursesw5-dev \
xz-utils \
tk-dev
RUN tar xvf Python-${PYTHON_VERSION}.tgz \
&& rm -f Python-${PYTHON_VERSION}.tgz
RUN cd Python-$PYTHON_VERSION \
&& ./configure \
--prefix=${INSTALLATION_PREFIX} \
--build=$(dpkg-architecture --query DEB_BUILD_GNU_TYPE) \
--enable-loadable-sqlite-extensions \
--enable-shared \
--with-system-expat \
--with-system-ffi \
--without-ensurepip \
&& make -j $(nproc) \
&& make altinstall
ENV LD_LIBRARY_PATH="${INSTALLATION_PREFIX}/lib:$LD_LIBRARY_PATH"
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O /get-pip.py \
&& /usr/src/python/Python-3.6.5/python /get-pip.py \
--prefix ${INSTALLATION_PREFIX} \
--disable-pip-version-check \
--no-cache-dir \
--no-warn-script-location \
pip==${PIP_VERSION}
WORKDIR /
RUN mv ${INSTALLATION_PREFIX} /tmp/test
ENV LD_LIBRARY_PATH="/tmp/test/lib:$LD_LIBRARY_PATH"
ENV PATH="/tmp/test/bin:$PATH"
When I build the above Dockerfile, it builds fine and I am able to do python3.6 --version and pip --version and I get the expected results.
Now, because of a requirement that I have (imagine building different Python versions from source in a Docker image, storing these built runtimes somewhere, and installing them on demand to a location before using them), I need to move this built Python and Pip to a different location, for example /tmp/test... I was hoping that the Python and Pip executables would work just as before, but when I run pip --version I get the following error:
root@12c243458190:/# pip --version
bash: /tmp/test/bin/pip: /opt/python/3.6.5/bin/python3.6: bad interpreter: No such file or directory
pip seems to have hard-coded the location of Python to the installation prefix it was given when it was built.
Any ideas on how to fix this issue?
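(For context on why this happens, assuming pip was installed the usual way: get-pip.py writes console scripts whose first line is a shebang containing the absolute path of the interpreter used at install time, which matches the path in the error above. A sketch of how to see it and work around it:)
# the entry-point script's shebang still points at the old prefix
head -1 /tmp/test/bin/pip     # -> #!/opt/python/3.6.5/bin/python3.6
# workaround 1: bypass the shebang entirely
/tmp/test/bin/python3.6 -m pip --version
# workaround 2: rewrite the shebang to point at the new location
sed -i '1s|/opt/python/3.6.5|/tmp/test|' /tmp/test/bin/pip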
I'm trying to install graph-tool for Anaconda Python 3.5 on Ubuntu 14.04 (x64), but it turns out that's a real trick.
I tried this approach, but ran into this problem:
The following specifications were found to be in conflict:
- graph-tool
Use "conda info <package>" to see the dependencies for each package.
Digging through the dependencies led to a dead end at gobject-introspection.
So I tried another approach:
I installed boost with conda, then tried to ./configure, make, and make install graph-tool... which got only as far as ./configure:
===========================
Using python version: 3.5.2
===========================
checking for boostlib >= 1.54.0... yes
checking whether the Boost::Python library is available... yes
checking whether boost_python is the correct library... no
checking whether boost_python-py27 is the correct library... no
checking whether boost_python-py27 is the correct library... (cached) no
checking whether boost_python-py27 is the correct library... (cached) no
checking whether boost_python-py35 is the correct library... yes
checking whether the Boost::IOStreams library is available... yes
configure: error: Could not link against boost_python-py35 !
I know this is something about environment variables for the ./configure command and conda installing libboost to Anaconda's weird place, I just don't know what to do, and my Google-fu is failing me. So this is another dead end.
Can anyone who's had to install graph-tool recently in linux-64 give me a walkthrough? It's a fresh VM running in VMWare Workstation 10.0.7
For those who run into similar issues, try changing the order of conda channels first with:
$ conda config --add channels ostrokach
$ conda config --add channels defaults
$ conda config --add channels conda-forge
then:
$ conda install graph-tool
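To sanity-check the install afterwards (a quick smoke test, nothing more):
$ python -c "import graph_tool; print(graph_tool.__version__)"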
Installing graph-tool 2.26 for Anaconda Python 3.5, Ubuntu 14.04.
Note: as of this writing, the ostrokach channel conda install of graph-tool was only at version 2.18.
Here's the Dockerfile I use to install graph-tool 2.26. There's likely a cleaner way, but so far this is the only thing I've managed to cobble together that actually works.
NOTE: If you're unfamiliar with Dockerfiles and you'd just like to do the install from the terminal, ignore the first line (starting with FROM), ignore every occurrence of the word RUN, and what you're left with is a series of commands to execute in a terminal.
FROM [your 14.04 base image]
RUN conda upgrade -y conda
RUN conda upgrade -y matplotlib
RUN \
add-apt-repository -y ppa:ubuntu-toolchain-r/test && \
apt-get update -y && \
apt-get install -y gcc-5 g++-5 && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 60 --slave /usr/bin/g++ g++ /usr/bin/g++-5
RUN wget https://github.com/CGAL/cgal/archive/releases/CGAL-4.10.2.tar.gz && \
tar xzf CGAL-4.10.2.tar.gz && \
cd cgal-releases-CGAL-4.10.2/ && \
cmake . && \
make && \
make install
RUN cd /tmp && \
# note: master branch of repo appears relatively stable, has not been updated since 2016
git clone https://github.com/sparsehash/sparsehash.git && \
cd sparsehash && \
./configure && \
make && \
make install
RUN apt-get update
RUN apt-get install -y build-essential g++ python-dev autotools-dev libicu-dev build-essential libbz2-dev libboost-all-dev
RUN apt-get install -y autogen autoconf libtool shtool
# install boost
RUN cd /tmp && \
wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.gz && \
tar xzvf boost_1_66_0.tar.gz && \
cd boost_1_66_0 && \
sudo ./bootstrap.sh --prefix=/usr/local && \
sudo ./b2 && \
sudo ./b2 install
# install newer cairo
RUN cd /tmp && \
wget https://cairographics.org/releases/cairo-1.14.12.tar.xz && \
tar xf cairo-1.14.12.tar.xz && \
cd cairo-1.14.12 && \
./configure && \
make && \
sudo make install
RUN cd /tmp && \
wget https://download.gnome.org/sources/libsigc++/2.99/libsigc++-2.99.10.tar.xz && \
tar xf libsigc++-2.99.10.tar.xz && \
cd libsigc++-2.99.10 && \
./configure && \
make && \
sudo make install && \
sudo cp ./sigc++config.h /usr/local/include/sigc++-3.0/sigc++config.h
RUN cd /tmp && \
wget https://www.cairographics.org/releases/cairomm-1.15.5.tar.gz && \
tar xf cairomm-1.15.5.tar.gz && \
cd cairomm-1.15.5 && \
./configure && \
make && \
sudo make install && \
sudo cp ./cairommconfig.h /usr/local/include/cairomm-1.16/cairomm/cairommconfig.h
RUN conda install -y -c conda-forge boost pycairo
RUN conda install -y -c numba numba=0.36.2
RUN conda install -y -c libboost py-boost && \
conda update -y cffi dbus expat pycairo pandas scipy numpy harfbuzz setuptools boost
RUN apt-get install -y apt-file dbus libdbus-1-dev && \
apt-file update
RUN apt-get install -y graphviz
RUN conda install -y -c conda-forge python-graphviz
RUN sudo apt-get install -y valgrind
RUN apt-get install -y libcgal-dev libcairomm-1.0 libcairomm-1.0-dev libcairo2-dev python-cairo-dev
RUN conda install -y -c conda-forge pygobject
RUN conda install -y -c ostrokach gtk
RUN cd /tmp && \
wget https://git.skewed.de/count0/graph-tool/repository/release-2.26/archive.tar.bz2 && \
bunzip2 archive.tar.bz2 && \
tar -xf archive.tar && \
cd graph-tool-release-2.26-b89e6b4e8c5dba675997d6f245b301292a5f3c59 && \
# Fix problematic parts of the graph-tool configure.ac file
sed -i 's/PKG_INSTALLDIR/#PKG_INSTALLDIR/' ./configure.ac && \
sed -i 's/AM_PATH_PYTHON(\[2\.7\])/AM_PATH_PYTHON(\[3\.5\])/' ./configure.ac && \
sed -i 's/\${PYTHON}/\/usr\/local\/anaconda3\/bin\/python/' ./configure.ac && \
sed -i '$a ACLOCAL_AMFLAGS = -I m4' ./Makefile.am && \
sudo ./autogen.sh && \
sudo ./configure CPPFLAGS="-I/usr/local/include -I/usr/local/anaconda3/pkgs/pycairo-1.15.4-py35h1b9232e_1/include -I/usr/local/include/cairo -I/usr/local/include/sigc++-3.0 -I/usr/include/freetype2" \
LDFLAGS="-L/usr/local/include -L/usr/local/lib/cairo -L/usr/local/include/sigc++-3.0 -L/usr/include/freetype2" \
PYTHON="/usr/local/anaconda3/bin/python" \
PYTHON_VERSION=3.5 && \
sudo make && \
sudo make install
Warning: making graph-tool might take a couple of hours and require >7 GB of RAM.
When trying to install PyV8 on Ubuntu, I type the command:
python setup.py build
it displays this error:
error: command 'c++' failed with exit status 1
Does anybody have a solution for this?
Here is what I have in my Dockerfile. The following is tested and runs in production on top of Debian Stretch. I recommend using exactly the PyV8 / V8 setup that I'm using - I've spent at least a week figuring out which combination doesn't lead to memory leaks. I also recommend reading through the discussion and the JSContext fix here and here.
In short, support for PyV8 is almost non-existent - either you use it just as a toy, or you follow exactly this recipe, or you spend a significant amount of time and effort forking the repo and making it better. If starting fresh, I recommend using Node.js instead and communicating with Python through some IPC method.
ENV MY_HOME /home/forge
ENV MY_LIB $MY_HOME/lib
# preparing dependencies for V8 and PyV8
ENV V8_HOME $MY_LIB/v8
RUN apt-get update && \
apt-get install -y libboost-thread-dev \
libboost-all-dev \
libboost-dev \
libboost-python-dev \
autoconf \
libtool \
systemtap \
scons
# compiling an older version of boost, required for this version of V8
RUN mkdir -p $MY_LIB/boost && cd $MY_LIB/boost && \
wget http://sourceforge.net/projects/boost/files/boost/1.54.0/boost_1_54_0.tar.gz && tar -xvzf boost_1_54_0.tar.gz && cd $MY_LIB/boost/boost_1_54_0 && \
./bootstrap.sh && \
./b2 install --prefix=/usr/local --with-python --with-thread && \
ldconfig && \
ldconfig /usr/local/lib
# preparing gcc 4.9 - anything newer will lead to errors with the V8 codebase
ENV CC "gcc-4.9"
ENV CPP "gcc-4.9 -E"
ENV CXX "g++-4.9"
ENV PATH_BEFORE_V8 "${MY_HOME}/bin:${PATH}"
ENV PATH "${MY_HOME}/bin:${PATH}"
RUN echo "deb http://ftp.us.debian.org/debian/ jessie main contrib non-free" >> /etc/apt/sources.list && \
echo "deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free" >> /etc/apt/sources.list && \
apt-get update && \
apt-get install -y gcc-4.9 g++-4.9 && \
mkdir -p ${MY_HOME}/bin && cd ${MY_HOME}/bin && \
ln -s /usr/bin/${CC} ${MY_HOME}/bin/gcc && \
ln -s /usr/bin/${CC} ${MY_HOME}/bin/x86_64-linux-gnu-gcc && \
ln -s /usr/bin/${CXX} ${MY_HOME}/bin/g++ && \
ln -s /usr/bin/${CXX} ${MY_HOME}/bin/x86_64-linux-gnu-g++
# compiling a specific version of V8 and PyV8, since older combos lead to memory leaks
RUN git clone https://github.com/muellermichel/V8_r10452.git $V8_HOME && \
git clone https://github.com/muellermichel/PyV8_r429.git $MY_LIB/pyv8 && \
cd $MY_LIB/pyv8 && python setup.py build && python setup.py install
# cleaning up
RUN PATH=${PATH_BEFORE_V8} && \
head -n -2 /etc/apt/sources.list > ${MY_HOME}/sources.list.temp && \
mv ${MY_HOME}/sources.list.temp /etc/apt/sources.list && \
apt-get update
ENV PATH "${PATH_BEFORE_V8}"
ENV CC ""
ENV CPP ""
ENV CXX ""
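Once everything is built, a one-line smoke test (a sketch, assuming the classic PyV8 JSContext API):
RUN python -c 'import PyV8; ctxt = PyV8.JSContext(); ctxt.enter(); print(ctxt.eval("1 + 2"))'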
An older version that depends on the now-defunct Google Code and was made for Ubuntu 12.04:
export MY_LIB_FOLDER=[PUT-YOUR-DESIRED-INSTALL-PATH-HERE]
apt-get install -y libboost-thread-dev
apt-get install -y libboost-all-dev
apt-get install -y libboost-dev
apt-get install -y libboost-python-dev
apt-get install -y git-core autoconf libtool systemtap
apt-get install -y subversion
apt-get install -y wget
mkdir -p $MY_LIB_FOLDER/boost && cd $MY_LIB_FOLDER/boost && wget http://sourceforge.net/projects/boost/files/boost/1.54.0/boost_1_54_0.tar.gz && tar -xvzf boost_1_54_0.tar.gz
cd $MY_LIB_FOLDER/boost/boost_1_54_0 && ./bootstrap.sh && ./b2 install --prefix=/usr/local --with-python --with-thread && ldconfig && ldconfig /usr/local/lib
svn checkout -r10452 http://v8.googlecode.com/svn/trunk/ $MY_LIB_FOLDER/v8
export V8_HOME=$MY_LIB_FOLDER/v8
svn checkout -r429 http://pyv8.googlecode.com/svn/trunk/ $MY_LIB_FOLDER/pyv8
git clone https://github.com/taguchimail/pyv8-linux-x64.git $MY_LIB_FOLDER/pyv8-taguchimail && cd $MY_LIB_FOLDER/pyv8-taguchimail && git checkout origin/stable
apt-get install -y scons
cd $MY_LIB_FOLDER/pyv8 && patch -p0 < $MY_LIB_FOLDER/pyv8-taguchimail/patches/pyv8.patch && python setup.py build && python setup.py install
Had the same problem and this worked for me:
export LIB=~
apt-get install -y curl libboost-thread-dev libboost-all-dev libboost-dev libboost-python-dev git-core autoconf libtool
svn checkout -r19632 http://v8.googlecode.com/svn/trunk/ $LIB/v8
export V8_HOME=$LIB/v8
svn checkout http://pyv8.googlecode.com/svn/trunk/ $LIB/pyv8 && cd $LIB/pyv8 && python setup.py build && python setup.py install
Solution found in comments here - https://code.google.com/p/pyv8/wiki/HowToBuild
I'm using a Debian-based distro. Here's how I installed PyV8 (you'll need to have git installed):
cd /usr/share
sudo git clone https://github.com/emmetio/pyv8-binaries.git
cd pyv8-binaries/
sudo unzip pyv8-linux64.zip
sudo cp -a PyV8.py _PyV8.so /usr/bin