Install OpenCV with CUDA in Docker but with very little memory - python

For a Docker container I am making a build of OpenCV that should work together with CUDA.
Here is an excerpt from the Dockerfile:
ARG OPENCV_VERSION=4.7.0
RUN cd /opt/ &&\
# Download and unzip OpenCV and opencv_contrib and delete zip files
wget https://github.com/opencv/opencv/archive/$OPENCV_VERSION.zip &&\
unzip $OPENCV_VERSION.zip &&\
rm $OPENCV_VERSION.zip &&\
wget https://github.com/opencv/opencv_contrib/archive/$OPENCV_VERSION.zip &&\
unzip ${OPENCV_VERSION}.zip &&\
rm ${OPENCV_VERSION}.zip &&\
# Create build folder and switch to it
mkdir /opt/opencv-${OPENCV_VERSION}/build && cd /opt/opencv-${OPENCV_VERSION}/build &&\
# Cmake configure
cmake \
-DOPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-${OPENCV_VERSION}/modules \
-DWITH_CUDA=ON \
-DCMAKE_BUILD_TYPE=RELEASE \
# Install path will be /usr/local/lib (lib is implicit)
-DCMAKE_INSTALL_PREFIX=/usr/local \
.. &&\
# Make
make -j"$(nproc)" && \
# Install to /usr/local/lib
make install && \
ldconfig &&\
# Remove OpenCV sources and build folder
rm -rf /opt/opencv-${OPENCV_VERSION} && rm -rf /opt/opencv_contrib-${OPENCV_VERSION}
So far this works, but I have to keep the container very small. With OpenCV built for CUDA I end up at about 2.7 GB including all the data, etc.
I would now like to make this smaller. My idea was to leave out the OpenCV extra (contrib) modules, but then I can no longer build OpenCV with CUDA, since some of the required libraries are probably in there.
Is there a way to build without those libraries, or can I delete them and "tell" OpenCV not to look for them? If I just leave them out, OpenCV produces a lot of error messages.
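For reference, the direction I have been experimenting with is trimming the build via CMake switches instead of deleting files afterwards. The flag names below are standard OpenCV build options; whether a reduced module list like this still satisfies the CUDA dependencies is exactly what I am unsure about:
cmake \
-DOPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-${OPENCV_VERSION}/modules \
-DWITH_CUDA=ON \
# Build only the modules I actually need (plus whatever they pull in)
-DBUILD_LIST=core,imgproc,imgcodecs,cudaarithm,cudaimgproc,python3 \
# Leave out tests, examples, apps and docs
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DBUILD_EXAMPLES=OFF \
-DBUILD_opencv_apps=OFF \
-DBUILD_DOCS=OFF \
# Generate CUDA code only for the TX2 (compute capability 6.2)
-DCUDA_ARCH_BIN=6.2 \
-DCUDA_ARCH_PTX="" \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=/usr/local \
..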
Here is the whole Dockerfile (it is not perfect):
FROM ghcr.io/ifm/ifm3d:latest-l4t-arm64 AS buildstage
COPY requirements.txt /tmp/
COPY python /tmp/python
USER root
ARG JETPACK_VERSION_BASE="r32.4"
ARG JETPACK_VERSION="${JETPACK_VERSION_BASE}.3"
ARG BASE_IMAGE="nvcr.io/nvidia/l4t-base:${JETPACK_VERSION}"
ARG SOC="t186"
ADD --chown=root:root https://repo.download.nvidia.com/jetson/jetson-ota-public.asc /etc/apt/trusted.gpg.d/jetson-ota-public.asc
RUN chmod 644 /etc/apt/trusted.gpg.d/jetson-ota-public.asc \
&& apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& echo "deb https://repo.download.nvidia.com/jetson/common ${JETPACK_VERSION_BASE} main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
&& echo "deb https://repo.download.nvidia.com/jetson/${SOC} ${JETPACK_VERSION_BASE} main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
&& cat /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
&& apt-get update \
&& rm -rf /var/lib/apt/lists/*
ARG CUDA=10.2
ENV CUDA=${CUDA}
ENV PATH /usr/local/cuda-$CUDA/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/cuda-$CUDA/targets/aarch64-linux/lib:${LD_LIBRARY_PATH}
RUN ldconfig
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ARG OPENCV_VERSION=4.7.0
RUN apt-get update && \
apt-get install -y \
bc \
bzip2 \
language-pack-en-base \
python3-distutils \
python3-pip \
build-essential \
cuda-libraries-dev-${CUDA} \
cuda-cudart-dev-${CUDA} \
cuda-compiler-${CUDA} \
libnvinfer-samples \
ca-certificates \
python-dev \
git \
cmake \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev \
libxine2-dev \
libglew-dev \
libtiff5-dev \
zlib1g-dev \
libjpeg-dev \
libavcodec-dev \
libavformat-dev \
libavutil-dev \
libpostproc-dev \
libswscale-dev \
libeigen3-dev \
libtbb-dev \
libgtk2.0-dev \
pkg-config \
## Python
python3-numpy \
libopencv-dev \
g++ \
libcudnn8 \
nvidia-cudnn8 \
libsm6 \
libxrender-dev \
&& rm -rf /var/lib/apt/lists/*
USER ifm
RUN python3 -m pip install --upgrade pip
RUN pip3 install --requirement /tmp/requirements.txt
USER root
ARG OPENCV_VERSION=4.7.0
RUN cd /opt/ &&\
# Download and unzip OpenCV and opencv_contrib and delete zip files
wget https://github.com/opencv/opencv/archive/$OPENCV_VERSION.zip &&\
unzip $OPENCV_VERSION.zip &&\
rm $OPENCV_VERSION.zip &&\
wget https://github.com/opencv/opencv_contrib/archive/$OPENCV_VERSION.zip &&\
unzip ${OPENCV_VERSION}.zip &&\
rm ${OPENCV_VERSION}.zip &&\
# Create build folder and switch to it
mkdir /opt/opencv-${OPENCV_VERSION}/build && cd /opt/opencv-${OPENCV_VERSION}/build &&\
# Cmake configure
cmake \
-DOPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-${OPENCV_VERSION}/modules \
-DWITH_CUDA=ON \
-DCMAKE_BUILD_TYPE=RELEASE \
# Install path will be /usr/local/lib (lib is implicit)
-DCMAKE_INSTALL_PREFIX=/usr/local \
.. &&\
# Make
make -j"$(nproc)" && \
# Install to /usr/local/lib
make install && \
ldconfig &&\
# Remove OpenCV sources and build folder
rm -rf /opt/opencv-${OPENCV_VERSION} && rm -rf /opt/opencv_contrib-${OPENCV_VERSION}
#RUN ls /usr/local/lib/libopencv*
FROM ghcr.io/ifm/ifm3d:latest-l4t-arm64
USER root
#COPY --from=buildstage /usr/local/include/opencv4/ /usr/local/include/opencv4/
COPY --from=buildstage /home/ifm/venv/lib/python3.9/site-packages /home/ifm/venv/lib/python3.9/site-packages
COPY --from=buildstage /usr/local/lib/python3.9/site-packages/cv2/ /home/ifm/venv/lib/python3.9/site-packages/cv2/
COPY --from=buildstage /usr/local/lib/libopencv* /usr/local/lib/
COPY --from=buildstage /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/
#RUN ldconfig
COPY python /tmp/python
The base container is an L4T image for the Jetson TX2.
I have tried building the container several times, but currently OpenCV only works without CUDA.
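One more thing I noticed while writing this: the base image is arm64, so the shared libraries the runtime stage needs should sit under /usr/lib/aarch64-linux-gnu rather than x86_64-linux-gnu. This is the copy stage I am currently testing (the paths are my guesses for this image):
COPY --from=buildstage /usr/local/lib/libopencv* /usr/local/lib/
# arm64 (L4T) uses the aarch64-linux-gnu multiarch directory
COPY --from=buildstage /usr/lib/aarch64-linux-gnu/ /usr/lib/aarch64-linux-gnu/
# Refresh the linker cache so the copied libraries are found
RUN ldconfig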

Related

How to install Python 3.9.10 inside a container? Can't use the python images

I must run some code inside a container, but the code requires Python 3.9.10, and I am struggling to install version 3.9.10.
I can modify the Dockerfile, but I can't change the base image (which is based on Ubuntu) to python/whatever.
I tried apt install, but the installed version is 3.9.5.
I also tried using conda to install version 3.9.10, but it is incredibly painful to activate its environments automatically.
Suggestions?
I ended up compiling the version I needed and changing the system defaults; here's my Dockerfile.
FROM nvidia/cuda:11.4.2-cudnn8-runtime-ubuntu20.04
ENV TZ=America/Sao_Paulo
ENV DEBIAN_FRONTEND=noninteractive
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
RUN apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y \
build-essential \
curl \
git \
libbz2-dev \
libffi-dev \
liblzma-dev \
libncursesw5-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
libxml2-dev \
libxmlsec1-dev \
llvm \
make \
tk-dev \
wget \
xz-utils \
zlib1g-dev \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
# compile python 3.9.10
RUN git clone --depth 1 https://github.com/python/cpython.git --branch v3.9.10 \
&& cd /cpython \
&& ./configure --enable-optimizations \
&& make \
&& make install \
&& update-alternatives --install /usr/bin/python python /usr/local/bin/python3 999 \
&& rm -rf /cpython
# ...
# commands that require both Python 3.9.10 and "nvidia stuff"
# ...
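To sanity-check the result I build the image and print the versions (the image tag is only an example):
docker build -t python3910-cuda .
docker run --rm python3910-cuda python --version        # should report Python 3.9.10
docker run --rm python3910-cuda python -m pip --version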

Azure functions using python docker file

I have followed the tutorial for Azure Functions using Python.
Everything went smoothly.
For the next step I need to add a C-compiled dependency.
I just added the C compiler plus the rows for building the dependency.
I have edited the Docker file and it now looks like this:
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
FROM julia:1.3
RUN apt-get update && apt-get install -y gcc g++ && rm -rf /var/lib/apt/lists/*
FROM python:3.7
RUN pip install numpy
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
When I build this Dockerfile it looks good,
but when I run it, it just opens up a GCC prompt.
What am I doing wrong?
Thanks
I found an issue with your multi-stage FROM statements. Also, you need to add apt-get install make.
The following works:
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
# Adding "apt-get install make" here
RUN apt-get update && apt-get install make && apt-get install -y gcc g++ && rm -rf /var/lib/apt/lists/*
RUN pip install numpy
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
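If the TA-Lib Python wrapper is listed in your requirements.txt (an assumption on my part, since only the C library is built here), a quick import check at the end of the Dockerfile confirms that the library is found:
# Assumes the pip package "TA-Lib" was installed via requirements.txt
RUN python -c "import talib; print(talib.__version__)"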

Build Python and Pip from source and "bad interpreter: No such file or directory"

I am trying to build Python and Pip from source. Following is the Dockerfile for it:
FROM buildpack-deps:stretch
ARG PYTHON_VERSION="3.6.5"
ARG PIP_VERSION="20.2"
ARG INSTALLATION_PREFIX="/opt/python/${PYTHON_VERSION}"
WORKDIR /usr/src/python
RUN wget -q https://www.python.org/ftp/python/$PYTHON_VERSION/Python-${PYTHON_VERSION}.tgz \
&& apt-get update \
&& apt-get install -y \
make \
build-essential \
libssl-dev \
zlib1g-dev \
libbz2-dev \
libreadline-dev \
libsqlite3-dev \
wget \
curl \
llvm \
libncurses5-dev \
libncursesw5-dev \
xz-utils \
tk-dev
RUN tar xvf Python-${PYTHON_VERSION}.tgz \
&& rm -f Python-${PYTHON_VERSION}.tgz
RUN cd Python-$PYTHON_VERSION \
&& ./configure \
--prefix=${INSTALLATION_PREFIX} \
--build=$(dpkg-architecture --query DEB_BUILD_GNU_TYPE) \
--enable-loadable-sqlite-extensions \
--enable-shared \
--with-system-expat \
--with-system-ffi \
--without-ensurepip \
&& make -j $(nproc) \
&& make altinstall
ENV LD_LIBRARY_PATH="${INSTALLATION_PREFIX}/lib:$LD_LIBRARY_PATH"
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O /get-pip.py \
&& /usr/src/python/Python-3.6.5/python /get-pip.py \
--prefix ${INSTALLATION_PREFIX} \
--disable-pip-version-check \
--no-cache-dir \
--no-warn-script-location \
pip==${PIP_VERSION}
WORKDIR /
RUN mv ${INSTALLATION_PREFIX} /tmp/test
ENV LD_LIBRARY_PATH="/tmp/test/lib:$LD_LIBRARY_PATH"
ENV PATH="/tmp/test/bin:$PATH"
When I build the above Dockerfile, it builds fine and I am able to do python3.6 --version and pip --version and I get the expected results.
Now, because of a requirement I have (imagine building different Python versions from source in a Docker image, storing these built runtimes somewhere, and installing them on demand to a location before use), I need to move this built Python and pip to a different location, for example /tmp/test. I was hoping the Python and pip executables would work just as before, but when I run pip --version I get the following error:
root@12c243458190:/# pip --version
bash: /tmp/test/bin/pip: /opt/python/3.6.5/bin/python3.6: bad interpreter: No such file or directory
pip seems to have hard-coded the location of Python to the installation prefix it was given when it was built.
Any ideas on how to fix this issue?
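Two workarounds I am experimenting with in the meantime (I am not sure either is the "right" fix): calling pip through the interpreter so the shebang is never used, or rewriting the shebang after the move:
# Workaround 1: bypass the hard-coded shebang entirely
/tmp/test/bin/python3.6 -m pip --version
# Workaround 2: point the scripts at the relocated interpreter (paths from my example above)
sed -i '1s|^#!.*$|#!/tmp/test/bin/python3.6|' /tmp/test/bin/pip*
pip --version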

Docker opencv3 Cmake errors

I tried to use Docker to build a Python 3 + OpenCV 3 environment with FFmpeg enabled.
Since I also want to use the GPU to speed up the model, I built on an NVIDIA Docker image.
Here is my Dockerfile:
FROM nvidia/cuda:8.0-cudnn5-devel
...
...
#############################################
# OpenCV 3 w/ Python 2.7 from Anaconda
#############################################
RUN cd ~/ &&\
git clone https://github.com/opencv/opencv.git &&\
git clone https://github.com/opencv/opencv_contrib.git &&\
cd opencv && mkdir build && cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/opt/opencv \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON \
-D PYTHON_DEFAULT_EXECUTABLE=/opt/conda/bin/python2.7 BUILD_opencv_python2=True \
-D PYTHON_LIBRARY=/opt/conda/lib/libpython2.7.so \
-D PYTHON_INCLUDE_DIR=/opt/conda/include/python2.7 \
-D PYTHON2_NUMPY_INCLUDE_DIRS=/opt/conda/lib/python2.7/site-packages/numpy/core/include \
-D PYTHON_EXECUTABLE=/opt/conda/bin/python2.7 -DWITH_FFMPEG=ON \
-D BUILD_SHARED_LIBS=ON .. &&\
make -j4 && make install && ldconfig
ENV PYTHONPATH /opt/opencv/lib/python2.7/site-packages:$PYTHONPATH
Then I found that the build finally hit an error: CMake apparently could not locate CUDA_CUDA_LIBRARY. Since that part is configured by the nvidia/cuda:8.0-cudnn5-devel image, how can I deal with this error?
This is the error message:
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_CUDA_LIBRARY (ADVANCED)
linked by target "example_gpu_stereo_match" in directory /root/opencv/samples/gpu
linked by target "example_gpu_bgfg_segm" in directory /root/opencv/samples/gpu
linked by target "example_gpu_morphology" in directory /root/opencv/samples/gpu
linked by target "example_gpu_pyrlk_optical_flow" in directory /root/opencv/samples/gpu
linked by target "example_gpu_video_reader" in directory /root/opencv/samples/gpu
linked by target "example_gpu_surf_keypoint_matcher" in directory /root/opencv/samples/gpu
linked by target "example_gpu_farneback_optical_flow" in directory /root/opencv/samples/gpu
linked by target "example_gpu_hog" in directory /root/opencv/samples/gpu
linked by target "example_gpu_optical_flow" in directory /root/opencv/samples/gpu
linked by target "example_gpu_houghlines" in directory /root/opencv/samples/gpu
linked by target "example_gpu_driver_api_stereo_multi" in directory /root/opencv/samples/gpu
linked by target "example_gpu_cascadeclassifier" in directory /root/opencv/samples/gpu
linked by target "example_gpu_super_resolution" in directory /root/opencv/samples/gpu
linked by target "example_gpu_generalized_hough" in directory /root/opencv/samples/gpu
linked by target "example_gpu_driver_api_multi" in directory /root/opencv/samples/gpu
linked by target "example_gpu_opticalflow_nvidia_api" in directory /root/opencv/samples/gpu
linked by target "example_gpu_stereo_multi" in directory /root/opencv/samples/gpu
linked by target "example_gpu_video_writer" in directory /root/opencv/samples/gpu
linked by target "example_gpu_multi" in directory /root/opencv/samples/gpu
linked by target "example_gpu_cascadeclassifier_nvidia_api" in directory /root/opencv/samples/gpu
linked by target "example_gpu_alpha_comp" in directory /root/opencv/samples/gpu
-- Configuring incomplete, errors occurred!
Update: the whole Dockerfile
FROM nvidia/cuda:8.0-cudnn5-devel
MAINTAINER jiandong <jjdblast@gmail.com>
ARG THEANO_VERSION=rel-0.8.2
ARG TENSORFLOW_VERSION=0.8.0
ARG TENSORFLOW_ARCH=gpu
ARG KERAS_VERSION=1.0.3
#RUN echo -e "\n**********************\nNVIDIA Driver Version\n**********************\n" && \
# cat /proc/driver/nvidia/version && \
# echo -e "\n**********************\nCUDA Version\n**********************\n" && \
# nvcc -V && \
# echo -e "\n\nBuilding your Deep Learning Docker Image...\n"
# Necessary packages and FFmpeg
RUN apt-get update && apt-get install -y \
apt-utils \
autoconf \
automake \
bc \
bzip2 \
build-essential \
ca-certificates \
cmake \
curl \
ffmpeg \
g++ \
gfortran \
git \
libass-dev \
libatlas-base-dev \
libavcodec-dev \
libavformat-dev \
libavresample-dev \
libav-tools \
libdc1394-22-dev \
libffi-dev \
libfreetype6-dev \
libglib2.0-0 \
libhdf5-dev \
libjasper-dev \
libjpeg-dev \
liblapack-dev \
liblcms2-dev \
libopenblas-dev \
libopencv-dev \
libopenjpeg5 \
libpng12-dev \
libsdl1.2-dev \
libsm6 \
libssl-dev \
libtheora-dev \
libtiff5-dev \
libtool \
libva-dev \
libvdpau-dev \
libvorbis-dev \
libvtk6-dev \
libwebp-dev \
libxcb1-dev \
libxcb-shm0-dev \
libxcb-xfixes0-dev \
libxext6 \
libxrender1 \
libzmq3-dev \
nano \
pkg-config \
python-dev \
python-pycurl \
software-properties-common \
texinfo \
unzip \
vim \
webp \
wget \
zlib1g-dev \
&& \
apt-get clean && \
apt-get autoremove && \
rm -rf /var/lib/apt/lists/* && \
# Link BLAS library to use OpenBLAS using the alternatives mechanism (https://www.scipy.org/scipylib/building/linux.html#debian-ubuntu)
update-alternatives --set libblas.so.3 /usr/lib/openblas-base/libblas.so.3
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
python get-pip.py && \
rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
pyopenssl \
ndg-httpsclient \
pyasn1
#############################################
# Anaconda Python 2.7
#############################################
# RUN echo 'export PATH=/opt/conda/bin:$PATH' > /etc/profile.d/conda.sh && \
# wget https://repo.continuum.io/archive/Anaconda2-4.2.0-Linux-x86_64.sh -O ~/anaconda.sh && \
# /bin/bash ~/anaconda.sh -b -p /opt/conda && \
# rm ~/anaconda.sh
ADD Anaconda2-4.2.0-Linux-x86_64.sh /root/anaconda.sh
RUN echo 'export PATH=/opt/conda/bin:$PATH' > /etc/profile.d/conda.sh && \
/bin/bash /root/anaconda.sh -b -p /opt/conda && \
rm /root/anaconda.sh
ENV PATH /opt/conda/bin:$PATH
RUN conda update -y conda && \
conda update -y numpy && \
conda update -y scipy && \
conda update -y pandas && \
conda update -y matplotlib && \
conda update -y requests && \
conda install -c conda-forge pika=0.10.0 && \
conda install scikit-image && \
pip install --upgrade pip && \
pip install --upgrade git+git://github.com/Theano/Theano.git && \
pip install pyscenedetect --upgrade --no-dependencies
# Configuration file for theano
RUN echo -e "[global]\nfloatX = float32\ndevice = cpu\nopenmp = True" >> ~/.theanorc
#############################################
# OpenCV 3 w/ Python 2.7 from Anaconda
#############################################
RUN cd ~/ &&\
git clone https://github.com/opencv/opencv.git &&\
git clone https://github.com/opencv/opencv_contrib.git &&\
cd opencv && mkdir build && cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CUDA_CUDA_LIBRARY=/usr/local/cuda-8.0/targets/x86_64-linux/lib/stubs/libcuda.so \
-D CMAKE_INSTALL_PREFIX=/opt/opencv \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON \
-D PYTHON_DEFAULT_EXECUTABLE=/opt/conda/bin/python2.7 BUILD_opencv_python2=True \
-D PYTHON_LIBRARY=/opt/conda/lib/libpython2.7.so \
-D PYTHON_INCLUDE_DIR=/opt/conda/include/python2.7 \
-D PYTHON2_NUMPY_INCLUDE_DIRS=/opt/conda/lib/python2.7/site-packages/numpy/core/include \
-D PYTHON_EXECUTABLE=/opt/conda/bin/python2.7 -DWITH_FFMPEG=ON \
-D BUILD_SHARED_LIBS=ON .. &&\
make -j4 && make install && ldconfig
ENV PYTHONPATH /opt/opencv/lib/python2.7/site-packages:$PYTHONPATH
# Jupyter
RUN python -m ipykernel.kernelspec
# Install TensorFlow
RUN pip --no-cache-dir install \
https://storage.googleapis.com/tensorflow/linux/${TENSORFLOW_ARCH}/tensorflow-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl
# Install Theano and set up Theano config (.theanorc) for CUDA and OpenBLAS
RUN pip --no-cache-dir install git+git://github.com/Theano/Theano.git@${THEANO_VERSION} && \
\
echo "[global]\ndevice=gpu\nfloatX=float32\noptimizer_including=cudnn\nmode=FAST_RUN \
\n[lib]\ncnmem=0.95 \
\n[nvcc]\nfastmath=True \
\n[blas]\nldflag = -L/usr/lib/openblas-base -lopenblas \
\n[DebugMode]\ncheck_finite=1" \
> /root/.theanorc
# Install Keras
RUN pip --no-cache-dir install git+git://github.com/fchollet/keras.git@${KERAS_VERSION}
# Set up notebook config
COPY jupyter_notebook_config.py /root/.jupyter/
# Jupyter has issues with being run directly: https://github.com/ipython/ipython/issues/7062
COPY run_jupyter.sh /root/
# Expose Ports for TensorBoard (6006), Ipython (8888)
EXPOSE 6006 8888
WORKDIR "/root"
CMD ["/bin/bash"]
Update
After I tried adding -D CUDA_CUDA_LIBRARY=/usr/local/cuda-8.0/targets/x86_64-linux/lib/stubs/libcuda.so to the cmake command in my Dockerfile, I got this error:
[ 15%] Linking CXX static library ../../lib/libopencv_perf_stereo_pch_dephelp.a
/usr/bin/cmake: error while loading shared libraries: libkrb5.so.3: failed to map segment from shared object
modules/stereo/CMakeFiles/opencv_perf_stereo_pch_dephelp.dir/build.make:94: recipe for target 'lib/libopencv_perf_stereo_pch_dephelp.a' failed
make[2]: *** [lib/libopencv_perf_stereo_pch_dephelp.a] Error 127
CMakeFiles/Makefile2:19133: recipe for target 'modules/stereo/CMakeFiles/opencv_perf_stereo_pch_dephelp.dir/all' failed
make[1]: *** [modules/stereo/CMakeFiles/opencv_perf_stereo_pch_dephelp.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 15%] Linking CXX static library ../../lib/libopencv_stereo_pch_dephelp.a
[ 15%] Built target opencv_stereo_pch_dephelp
[ 15%] Linking CXX static library ../../lib/libopencv_test_stereo_pch_dephelp.a
[ 15%] Built target opencv_test_stereo_pch_dephelp
[ 15%] Linking CXX static library ../../lib/libopencv_superres_pch_dephelp.a
[ 15%] Built target opencv_superres_pch_dephelp
make: *** [all] Error 2
Makefile:160: recipe for target 'all' failed
After inspecting the Docker image for nvidia/cuda:8.0-cudnn5-devel, it seems that you need to add the following argument to cmake:
-DCUDA_CUDA_LIBRARY=/usr/local/cuda-8.0/targets/x86_64-linux/lib/stubs/libcuda.so
If anyone is using Ubuntu 16.04, add -DCUDA_CUDA_LIBRARY=/usr/local/cuda-8.0/lib64/stubs/libcuda.so to your CMake command.
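If you are not sure where the stub lives in your particular base image, a throwaway diagnostic step before the cmake call lists the candidates (remove it once you know the path):
RUN find /usr/local/cuda* /usr/lib -name "libcuda.so*" 2>/dev/null || true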

Create Docker container with Node v4.4.7 and Python 3

I'm trying to create a Docker image that has Python 3 and Node v4.4.7, so that I can use it as a container for my project, which needs both Python and that version of Node.
# Pull base image.
FROM python:3-onbuild
CMD [ "python", "./hello.py" ]
# Install Node.js
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
When I first tried it, it complained about not having a Python script to run, so I added a basic Python file, hello.py,
that just has this:
print "Hello, Python!"
Then it complained about not having a requirements.txt file, so I added an empty requirements.txt.
Now when I run docker build -t isaacweathersnet/sampledockerimage . it snafus during the Node install with
node-v4.4.0/benchmark/arrays/zero-int.js
File "./configure", line 446
'''
^
SyntaxError: Missing parentheses in call to 'print'
The command '/bin/sh -c cd /tmp && wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && tar xvzf node-v4.4.7.tar.gz && rm -f node-v4.4.7.tar.gz && cd node-v* && ./configure && CXX="g++ -Wno-unused-local-typedefs" make && CXX="g++ -Wno-unused-local-typedefs" make install && cd /tmp && rm -rf /tmp/node-v* && npm install -g npm && print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc' returned a non-zero code: 1
I found a solution on GitHub that had Python and Node. No luck with Python 3+, but it worked well with 2.7:
https://github.com/nsdont/python-node/blob/master/Dockerfile
FROM python:2.7
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
echo -e '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
There are nodejs-python and python-nodejs (which is built on top of nodejs-python). It's worth having a look at those.
python-nodejs provides Node 10.x, npm 6.x, yarn stable, Python latest, pip latest and pipenv latest. The versions should be adjustable to your needs: use the Dockerfile as a basis and adjust the RUN section
RUN \
echo "deb https://deb.nodesource.com/node_10.x stretch main" > /etc/apt/sources.list.d/nodesource.list && \
wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list && \
wget -qO- https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
apt-get update && \
apt-get install -yqq nodejs yarn && \
pip install -U pip && pip install pipenv && \
npm i -g npm@^6 && \
rm -rf /var/lib/apt/lists/*
to the Node version you need. The yarn part (a dependency-management alternative to npm) can be removed if you don't need yarn.
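For illustration, a trimmed version of that RUN with the yarn pieces dropped (still assuming Node 10.x from the NodeSource repository):
RUN \
echo "deb https://deb.nodesource.com/node_10.x stretch main" > /etc/apt/sources.list.d/nodesource.list && \
wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
apt-get update && \
apt-get install -yqq nodejs && \
pip install -U pip && pip install pipenv && \
npm i -g npm@^6 && \
rm -rf /var/lib/apt/lists/*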
