I am trying to modify this Docker Compose setup to install a number of Python packages, including yfinance. In my mind there are three ways to install packages, and I'd like to be able to use each one in Docker:
install from a pip requirements file
install from a conda environment.yml file
install manually in the environment by running an install command (pip install yfinance or conda install -c conda-forge beautifulsoup4)
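For reference, the three approaches map onto Dockerfile steps roughly like this (a hedged sketch; it assumes the build context contains a requirements.txt and an environment.yml, and the file names are illustrative):

```dockerfile
# 1. pip requirements file
COPY requirements.txt /tmp/requirements.txt
RUN python -m pip install --no-cache-dir -r /tmp/requirements.txt

# 2. conda environment file
COPY environment.yml /tmp/environment.yml
RUN conda env create -f /tmp/environment.yml

# 3. manual installs baked into the image
RUN pip install yfinance && \
    conda install --yes -c conda-forge beautifulsoup4
```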
Here is a list of the problems I am having trying to modify this setup:
pip requirements file - after altering this file, the packages don't appear to be installed; instead the file is overwritten with the defaults from GitHub.
conda environment file - unable to create the environment and install packages from environment.yml.
manual install of a package - entering a Docker container shell using docker exec -it <containername> /bin/bash results in pip and conda returning 'command not found' in bash.
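On the third point: the image's Dockerfile below uses SHELL ["/bin/bash", "-l", "-c"] and sources ~/.profile before calling conda, which suggests conda only lands on PATH in a login shell. A plain docker exec shell is not a login shell, so the following workaround (an assumption based on that Dockerfile, using the container name from the compose file) may help:

```shell
# Use a login shell so ~/.profile runs and conda/pip land on PATH
docker exec -it joshnotebook /bin/bash -l
# or run a single command through a login shell
docker exec -it joshnotebook /bin/bash -lc 'pip install yfinance'
```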
Results so far:
All of the above methods have resulted in errors, including 'command not found' or 'No module named yfinance' when importing in a notebook.
The only way I've had any success so far is opening a notebook in the browser at localhost:8888, creating a new notebook, and running !pip install yfinance. However, importing and executing the following code also results in an error, which makes me think the package or its dependencies haven't installed correctly.
import yfinance as yf
m = yf.Ticker("MSFT")
m.info
Here is the service from my docker-compose file:
jupyter:
  image: cmihai/jupyter:v1
  container_name: 'joshnotebook'
  hostname: jupyter
  restart: 'always'
  ports:
    - '8888:8888'
  env_file:
    - notebook/config/jupyter.env
  volumes:
    - ${USERDIR}/docker/notebook:/home/user/notebook
  build: ./notebook/services/jupyter
Here is my Dockerfile:
FROM cmihai/python:v1
LABEL name="cmihai/jupyterlab" \
      version="1" \
      maintainer="Mihai Criveti <crivetimihai@gmail.com>" \
      description="Anaconda Python with Jupyter Lab. \
This container installs Anaconda Python 3 and Jupyter Lab."
SHELL ["/bin/bash", "-l", "-c"]
ENV DEBIAN_FRONTEND noninteractive

# PARAMETERS
ARG OS_PACKAGES="tzdata curl graphviz vim-nox gosu mc sqlite3 bash-completion sudo"
#ARG LATEX_PACKAGES="texlive texlive-latex-base texlive-latex-extra texlive-xetex texlive-generic-recommended texlive-fonts-extra texlive-fonts-recommended pandoc"
ARG CONDA_PACKAGES="jupyterlab pandas"
COPY condaenvironment.yml /tmp/condaenvironment.yml

# ROOT
USER root

# INSTALL OS PACKAGES
# (append ${LATEX_PACKAGES} to the install line to enable LaTeX support; note
# that a comment after a line-continuation backslash breaks the command)
RUN apt-get update \
    && apt-get --no-install-recommends install --yes ${OS_PACKAGES} \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
RUN conda install --yes -c ranaroussi yfinance

# SWITCH TO REGULAR USER
RUN mkdir /home/user/notebook && chown -R user:user /home/user/notebook \
    && echo 'user ALL=NOPASSWD: ALL' >> /etc/sudoers
USER user

# INSTALL CONDA PACKAGES
RUN . ~/.profile \
    && unset SUDO_UID SUDO_GID SUDO_USER \
    && conda install --quiet --yes ${CONDA_PACKAGES} \
    && conda install --yes -c anaconda requests \
    && conda install --yes -c conda-forge lxml \
    && conda install --yes -c conda-forge beautifulsoup4 \
    && conda install --yes -c conda-forge sqlalchemy \
    && conda install --yes -c anaconda mysql-connector-python \
    && conda install --yes -c conda-forge seaborn \
    && conda install --yes -c conda-forge altair \
    && conda clean --yes --all

# INSTALL PIP PACKAGES
RUN . ~/.profile \
    && python -m pip install --no-cache-dir --upgrade pip \
    && python -m pip install --no-cache-dir yfinance

# WORKDIR
WORKDIR /home/user

# JUPYTERLAB EXTENSIONS
RUN jupyter labextension install jupyterlab-drawio \
    && jupyter labextension install jupyterlab_bokeh \
    && jupyter labextension install plotlywidget \
    && jupyter labextension install @jupyterlab/plotly-extension \
    && jupyter labextension install jupyterlab-chart-editor \
    && jupyter labextension install @jupyterlab/git \
    && jupyter serverextension enable --py jupyterlab_git \
    && jupyter labextension install @jupyterlab/latex \
    && jupyter labextension install @jupyterlab/toc \
    && jupyter labextension install @oriolmirosa/jupyterlab_materialdarker \
    && jupyter labextension install @jupyterlab/geojson-extension \
    && jupyter lab build

# COMMAND and ENTRYPOINT:
COPY start-jupyter.sh /home/user/start-jupyter.sh
CMD ["/home/user/start-jupyter.sh"]
Related
I'm trying to make a new Docker image for Python, but it keeps installing Python 3.10 instead of Python 3.8.
My Dockerfile looks like this:
FROM python:3.8.16
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
## follow installation from xtb-python github:
ENV CONDA_DIR /opt/conda
RUN apt-get update && apt-get install -y \
    wget \
    && rm -rf /var/lib/apt/lists/*
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh \
    && /bin/bash ~/miniconda.sh -b -p /opt/conda
ENV PATH=$CONDA_DIR/bin:$PATH
RUN conda config --add channels conda-forge \
    && conda install -y -c conda-forge ...
I don't know much about conda (but have to use it).
Is Conda messing with my Python, or did I do something wrong?
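What likely happens: Miniconda ships its own Python (currently 3.10), and the ENV PATH=$CONDA_DIR/bin:$PATH line puts it ahead of the image's 3.8. A hedged sketch of one possible fix, pinning conda's base Python to match the image (the package placeholder is hypothetical):

```dockerfile
# Sketch (assumption): pin conda's base environment to the image's Python
# version so conda's own interpreter doesn't shadow 3.8 on PATH.
ENV PATH=$CONDA_DIR/bin:$PATH
RUN conda config --add channels conda-forge && \
    conda install -y python=3.8 && \
    conda install -y -c conda-forge <your-packages>
```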
I'm trying to build a Docker image like:
FROM ubuntu:latest
RUN apt update && apt upgrade -y && \
    apt install -y git wget libsuitesparse-dev gcc g++ swig && \
    cd ~ && wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
    sh Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh && \
    PATH=$PATH:~/miniconda3/condabin && \
    conda init bash && conda upgrade -y conda && /bin/bash -c "source ~/.bashrc" && \
    pip install numpy scipy matplotlib scikit_umfpack
However, /bin/bash -c "source ~/.bashrc" does not work, so I get /bin/sh: 1: pip: not found.
How can I build a Docker image that installs Miniconda and installs Python requirements with pip at the same time?
I would recommend using a pre-existing Docker image that already has Anaconda installed. For example, this link has a Docker image endorsed by Anaconda itself. There may be others on Docker Hub that also have Anaconda installed. If you already tried an image with Anaconda and it didn't meet your needs, let me know.
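Alternatively, the "pip: not found" error happens because each RUN step starts a fresh shell, so sourcing ~/.bashrc (or assigning PATH) in one command has no effect afterwards. A minimal sketch, assuming an explicit Miniconda install prefix, that exposes conda's tools via ENV, which does persist across build steps:

```dockerfile
FROM ubuntu:latest
RUN apt update && \
    apt install -y git wget libsuitesparse-dev gcc g++ swig && \
    rm -rf /var/lib/apt/lists/*
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
    sh Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda && \
    rm Miniconda3-latest-Linux-x86_64.sh
# ENV applies to every later RUN step, unlike `source ~/.bashrc`
ENV PATH=/opt/conda/bin:$PATH
RUN pip install numpy scipy matplotlib scikit_umfpack
```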
Since GPU support with tensorflow-nightly is currently broken on Google Colab, I'm trying to build my own Docker image for development. However, when I install the object_detection package from tensorflow/models, my nightly TensorFlow package is overwritten by the version pulled in as a dependency by the object_detection setup.py.
I'm following essentially the same steps in Google Colab, but my TensorFlow nightly isn't overwritten there, so I'm not sure what I'm missing...
Here's my Dockerfile:
FROM tensorflow/tensorflow:nightly-gpu-jupyter
RUN python -c "import tensorflow as tf; print(f'Tensorflow version: {tf.__version__}')"
RUN apt-get install -y \
    curl \
    git \
    less \
    zip
RUN curl -L -O https://github.com/protocolbuffers/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip && unzip protoc-3.11.4-linux-x86_64.zip
RUN cp bin/protoc /usr/local/bin
RUN git clone --depth 1 https://github.com/tensorflow/models
RUN cd models/research && \
    protoc object_detection/protos/*.proto --python_out=. && \
    cp object_detection/packages/tf2/setup.py . && \
    python -m pip install .
RUN python -c "import tensorflow as tf; print(f'Tensorflow version: {tf.__version__}')"
which I'm building with:
docker pull tensorflow/tensorflow:nightly-gpu-jupyter
docker build --no-cache . -f models-tf-nightly.Dockerfile -t tf-nightly-models
The first print() shows:
Tensorflow version: 2.5.0-dev20201129
but the second one shows:
Tensorflow version: 2.3.1
In Google Colab I'm doing essentially the same steps:
%%bash
# Install the Object Detection API
pip install tf-nightly-gpu
[[ -d models ]] || git clone --depth 1 https://github.com/tensorflow/models
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
After which
import tensorflow as tf
print(tf.__version__)
prints 2.5.0-dev20201201
So somehow my Google Colab steps are preserving my nightly TensorFlow install, whereas in Docker it gets overwritten with 2.3.1.
If you look at pip list before installing the object detection package, you will see that tf-nightly-gpu is installed but tensorflow is not. When you install the object detection package, the tensorflow package is pulled in as a dependency. pip thinks it is not installed, so it installs it.
One way around this is to trick pip into thinking that the tensorflow package is installed. One can do this by symlinking the tf_nightly_gpu-VERSION.dist-info directory in dist-packages. I have added the lines to do this in the Dockerfile below. At the bottom of this post, I have also included a Dockerfile which implements some best practices to minimize image size.
FROM tensorflow/tensorflow:nightly-gpu-jupyter
RUN python -c "import tensorflow as tf; print(f'Tensorflow version: {tf.__version__}')"
RUN apt-get install -y \
    curl \
    git \
    less \
    zip
# Trick pip into thinking that the 'tensorflow' package is installed.
# Installing `object_detection` attempts to install the 'tensorflow' package.
# Name the symlink with the suffix from tf_nightly_gpu.
WORKDIR /usr/local/lib/python3.6/dist-packages
RUN ln -s tf_nightly_gpu-* tensorflow-$(ls -d1 tf_nightly_gpu* | sed 's/tf_nightly_gpu-\(.*\)/\1/')
WORKDIR /tf
RUN curl -L -O https://github.com/protocolbuffers/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip && unzip protoc-3.11.4-linux-x86_64.zip
RUN cp bin/protoc /usr/local/bin
RUN git clone --depth 1 https://github.com/tensorflow/models
RUN cd models/research && \
    protoc object_detection/protos/*.proto --python_out=. && \
    cp object_detection/packages/tf2/setup.py . && \
    python -m pip install .
RUN python -c "import tensorflow as tf; print(f'Tensorflow version: {tf.__version__}')"
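The suffix extraction in the ln -s line can be sanity-checked outside Docker; a small sketch using a made-up dist-info directory name:

```shell
# Mimic the dist-info layout from the nightly image (version is made up)
mkdir -p /tmp/distinfo-demo && cd /tmp/distinfo-demo
mkdir -p tf_nightly_gpu-2.5.0.dev20201129.dist-info
# Same extraction used in the Dockerfile: strip the 'tf_nightly_gpu-' prefix
suffix=$(ls -d1 tf_nightly_gpu* | sed 's/tf_nightly_gpu-\(.*\)/\1/')
echo "$suffix"   # prints 2.5.0.dev20201129.dist-info
# Create the alias pip will interpret as an installed 'tensorflow' package
ln -sf "tf_nightly_gpu-$suffix" "tensorflow-$suffix"
```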
Here is a Dockerfile that leads to a slightly smaller image (0.22 GB uncompressed). Notable changes are clearing the apt lists and using --no-cache-dir in pip install.
FROM tensorflow/tensorflow:nightly-gpu-jupyter
RUN python -c "import tensorflow as tf; print(f'Tensorflow version: {tf.__version__}')"
RUN apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
    git \
    less \
    zip && \
    rm -rf /var/lib/apt/lists/*
# Trick pip into thinking that the 'tensorflow' package is installed.
# Installing `object_detection` attempts to install the 'tensorflow' package.
# Name the symlink with the suffix from tf_nightly_gpu.
WORKDIR /usr/local/lib/python3.6/dist-packages
RUN ln -s tf_nightly_gpu-* tensorflow-$(ls -d1 tf_nightly_gpu* | sed 's/tf_nightly_gpu-\(.*\)/\1/')
WORKDIR /tf
RUN curl -L -O https://github.com/protocolbuffers/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip && \
    unzip protoc-3.11.4-linux-x86_64.zip && \
    cp bin/protoc /usr/local/bin && \
    rm -r protoc-3.11.4-linux-x86_64.zip bin/
# Upgrade pip.
RUN python -m pip install --no-cache-dir --upgrade pip
RUN git clone --depth 1 https://github.com/tensorflow/models
WORKDIR models/research
RUN protoc object_detection/protos/*.proto --python_out=. && \
    cp object_detection/packages/tf2/setup.py . && \
    python -m pip install --no-cache-dir .
RUN python -c "import tensorflow as tf; print(f'Tensorflow version: {tf.__version__}')"
I need to create a Conda environment and install dependencies (Python, R) in this environment.
All libraries - Python and R - are installed well, as far as I see in logs. No errors or warnings.
But it looks like R dependencies from file r_requirements.R are not installed in the same environment (myenvpython).
When I build and use the Docker image, I can use the installed Python libraries in the environment, but loading of R libraries fails.
How can I fix it?
FROM conda/miniconda3
COPY code/ci_dependencies.yml /setup/
COPY code/r_requirements.R /setup/
# activate environment
ENV PATH /usr/local/envs/myenvpython/bin:$PATH
RUN apt-get update && \
    apt-get -y install sudo
# RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
RUN conda update -n base -c defaults conda && \
    conda install python=3.7.5 && \
    conda env create -f /setup/ci_dependencies.yml && \
    /bin/bash -c "source activate myenvpython" && \
    az --version && \
    chmod -R 777 /usr/local/envs/myenvpython/lib/python3.7
RUN apt-get install -y libssl-dev libsasl2-dev
RUN Rscript /setup/r_requirements.R
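A likely cause, offered as an assumption: `source activate` in one RUN step does not carry over to later steps, so the final Rscript resolves to whichever R is first on PATH rather than the one in myenvpython. A hedged sketch that invokes the tool inside the environment explicitly with conda run:

```dockerfile
# Sketch, assuming the environment created from ci_dependencies.yml is
# named 'myenvpython'; conda run executes the command inside that env.
RUN conda run -n myenvpython Rscript /setup/r_requirements.R
```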
The plan is to deploy a pretrained face recognition model. But first I need to install some libs.
The idea behind Docker is that it brings all the needed libs and builds the entire 'env' without much overhead. One can just start the Dockerfile and it runs all the other scripts in turn.
libs to install:
Ubuntu 16.04.6 LTS
Python 3.6.10 (3.5.x should be fine also)
OpenCV 3.3.
NumPy
imutils https://github.com/jrosebr1/imutils
dlib http://dlib.net/
face_recognition https://github.com/ageitgey/face_recognition
I'm trying to use curl to download packages from URLs, but it's not working.
My Dockerfile:
FROM ubuntu:16.04.6
RUN apt-get update && apt-get install -y curl bzip2
curl -o numpy
&& sudo apt-get install numpy
&& curl install imutils https://github.com/jrosebr1/imutils
&& curl install dlib https://dlib.net
&& sudo git clone https://github.com/ageitgey/face_recognition.git
&& curl python-opencv https://opencv.org/
&& echo 'export PATH="~/anaconda3/bin:$PATH"' >> ~/.bashrc \
&& ~/anaconda3/bin/conda update -n base conda \
&& rm miniconda_install.sh \
&& rm -rf /var/lib/apt/lists/* \
&& /bin/bash -c "source ~/.bashrc"
ENV PATH="~/anaconda3/bin:${PATH}"
##################################################
# Setup env for current project:
##################################################
EXPOSE 8000
RUN /bin/bash -c "conda create -y -n PYMODEL3.6"
ADD requirements.txt /tmp/setup/requirements.txt
RUN /bin/bash -c "source activate PYMODEL3.6 && pip install -r /tmp/setup/requirements.txt"
WORKDIR /Service
ADD Service /Service
ENTRYPOINT ["/bin/bash", "-c", "source activate PYMODEL3.6 && ./run.sh"]
The face model is pretrained.
There are 2 Python files that do the actual detection, 128-d encoding and recognition.
The usage is like this:
#detect face, if there is a face - encode it, return pickle
python3 encode.py --dataset dataset_id --encodings encodings.pickle --confidence 0.9
#recognize using pickle
python3 face_recognizer.py --encodings encodings.pickle --image dataset_webcam/3_1.jpg --confidence 0.9 --tolerance 0.5
Should I include them in the Dockerfile?
I would propose using a Dockerfile like the following, assuming you have all your requirements (numpy, imutils, etc...) inside your requirements.txt file, and your encode.py and face_recognizer.py files in your Service folder:
FROM python:3.6.10
RUN mkdir /tmp/setup
ADD requirements.txt /tmp/setup/requirements.txt
RUN pip install --no-cache-dir --upgrade setuptools && \
    pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r /tmp/setup/requirements.txt
WORKDIR /Service
ADD Service /Service/
CMD ["./run.sh"]
EXPOSE 8000
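For completeness, a hypothetical requirements.txt covering the libraries listed in the question (exact version pins are an assumption and omitted; note that dlib builds from source and additionally needs cmake and a C++ toolchain in the image):

```
# requirements.txt - sketch based on the libraries listed in the question
numpy
imutils
dlib
face_recognition
opencv-python
```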