How can I use two images in a single dockerfile - python

Hi all, I'm learning Docker from the documentation and I've hit a situation where I'm stuck; any help on this would be great.
I have a computer vision model packaged as a microservice using Azure Functions, but the model runs on a GPU machine. I have added the function-app image, but I'm not sure how to add the NVIDIA CUDA image to the Dockerfile.
Attaching my Dockerfile below:
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.8
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN python -m venv /opt/venv
# "RUN . /opt/venv/bin/activate" would not persist across layers; put the venv on PATH instead
ENV PATH="/opt/venv/bin:$PATH"
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
The NVIDIA image I want to use is this one:
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04

A Dockerfile is not an execution manifest; it builds exactly one image. If you want to run multiple containers, you need a tool like Docker Compose, Kubernetes (overkill here), or, given that you have an Azure Functions container to run, Azure Container Instances.
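For concreteness, here is a minimal Docker Compose sketch of that setup, assuming the Azure Functions Dockerfile above sits in the current directory; the service names, port mapping, and GPU reservation are illustrative assumptions, not taken from the question:

version: "3.8"
services:
  function-app:
    build: .          # the Azure Functions Dockerfile above
    ports:
      - "8080:80"
  gpu-model:
    image: nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
    command: ["sleep", "infinity"]   # placeholder; run your model server here instead
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

Note that the deploy.resources.reservations.devices section requires a recent Compose version and the NVIDIA container toolkit on the host.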

Related

How to create a new docker image based on an existing image, but including more Python packages?

Let's say I've pulled the NVIDIA NGC PyTorch docker image like this:
docker pull nvcr.io/nvidia/pytorch:21.07-py3
Then I want to add these Python packages: omegaconf, wandb, and pycocotools.
How do I create a new Docker image with both the original Docker image and the additional Python packages?
Also, how do I distribute the new image throughout my organization?
Create a file named Dockerfile. Add to it the lines explained below.
Add a FROM line to specify the base image:
FROM nvcr.io/nvidia/pytorch:21.07-py3
Upgrade Pip to the latest version:
RUN python -m pip install --upgrade pip
Install the additional Python packages that you need:
RUN python -m pip install omegaconf wandb pycocotools
Altogether, the Dockerfile looks like this:
FROM nvcr.io/nvidia/pytorch:21.07-py3
RUN python -m pip install --upgrade pip
RUN python -m pip install omegaconf wandb pycocotools
In the same directory as the Dockerfile, run this command to build the new image, replacing my-new-image with a name of your choosing:
docker build -t my-new-image .
This works for me, but Pip generates a warning about installing packages as the root user. I found it best to ignore this warning. See the note at the end of this answer to understand why.
The new docker image should now appear on your system:
$ docker images
REPOSITORY               TAG         IMAGE ID       CREATED          SIZE
my-new-image             latest      082f76972805   13 seconds ago   15.1GB
nvcr.io/nvidia/pytorch   21.07-py3   7beec3ff8d35   5 weeks ago      15GB
[...]
You can now run the new image ...
$ docker run --gpus all -it --rm --ipc=host my-new-image
... and verify that it has the additional Python packages:
# python -m pip list | grep 'omegaconf\|wandb\|pycocotools'
omegaconf      2.1.1
pycocotools    2.0+nv0.5.1
wandb          0.12.1
The Docker Hub Repositories documentation details the steps necessary to:
Create a repository (possibly private)
Push an image
Add collaborators
Pull the image from the repository
A sketch of that workflow follows this list.
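As a rough sketch of the push/pull workflow (the your-org namespace is a placeholder assumption):

docker login
docker tag my-new-image your-org/my-new-image:latest
docker push your-org/my-new-image:latest
# collaborators with access to the repository can then run:
docker pull your-org/my-new-image:latest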
NOTE: the problem of non-root users. Although it is considered best practice not to run a Docker container as the root user, in practice non-root users can add several complications.
You could create a non-root user in your docker file with lines like this:
RUN useradd -ms /bin/bash myuser
USER myuser
ENV PATH "$PATH:/home/myuser/.local/bin"
However, if you run the container with volumes mounted via the -v flag, then myuser's access to those volumes depends on whether its UID or GID matches a user or group on the host. You can modify the useradd command line to specify the desired UID or GID, but of course the resulting image will not be portable to systems that use different IDs.
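For example, a sketch with explicit IDs (the 1000/1000 values are illustrative and should match the relevant host user):

RUN groupadd -g 1000 mygroup && \
    useradd -ms /bin/bash -u 1000 -g mygroup myuser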
Additionally, there appears to be a limitation that prevents a non-root user from accessing a mounted volume that points to an fscrypt-encrypted folder, although the same mount works fine as the root user.
For these reasons, I found it easiest to just let the container run as root.

Can not add python lib to existing Docker image on Ubuntu server

Good day,
I'm trying to deploy a Telegram bot on an AWS Ubuntu server, but I cannot run the application because the server says (when I run docker-compose up):
there is no name: asyncpg
However, I installed it manually on the server with
pip3 install asyncpg
and checked later that it is in the "packages" folder.
That said, I sort of understand where the problem comes from. When I first ran
sudo docker-compose up
It used this file:
My Dockerfile:
FROM python:3.8
WORKDIR /src
COPY requirements.txt /src
RUN pip install -r requirements.txt
COPY . /src
Where requirements.txt lacked this library. I edited requirements.txt with
nano
and tried to run
docker-compose up
again, but I ran into the same problem:
there is no asyncpg package
So, as I understand it, docker-compose up uses the already-built image, in which there is no such package. I tried different solutions from Stack Overflow, like docker-compose build and pip freeze, but nothing helped, probably because I don't quite understand what I'm doing; I'm a beginner at programming and Python.
How can I add this package to the existing Docker image?
So, after you have installed the library manually inside the container, you save the change back into the Docker image by committing the running container with:
docker commit <container-id> <image-name>
Let's take an example.
you have an image named application
you run the image and get back a container ID, say 1b390cd1dc2d.
Now, you can go into the running container using the command -
docker exec -it 1b390cd1dc2d /bin/bash
Next, install the package -
pip3 install asyncpg
Now exit the running container with exit.
Then use the commit command shared above to update the image:
docker commit 1b390cd1dc2d application
This updates the image, baking the required library into it.
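Keep in mind that docker commit only patches the image: the change is lost the next time the image is rebuilt from the Dockerfile. The durable fix, sketched here assuming asyncpg has been added to requirements.txt, is to rebuild through Compose:

docker-compose build
docker-compose up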

How to update docker images

Let's say I have a below Python code:
#!/usr/bin/python3
import time

while True:
    print("Hello World")
    time.sleep(1)
Using the above Python code I have created a Docker image, pythondocker, from a Dockerfile. The Dockerfile installs a lot of packages before the image can be built, and once the image is built I can easily start/stop the container.
Now my question is: say I have made a few changes to my Python code and want to update pythondocker with them. How can I achieve this? One way is to first stop the container, delete the image, and build it again, but rebuilding takes time because all the packages are installed again. Is there a way to keep the image instead of deleting it and apply the changes to it, or to rebuild the image without reinstalling the packages/dependencies mentioned in the Dockerfile?
Your Dockerfile may look like this:
FROM python:2
RUN apt-get update && apt-get install -y libxxx
ADD requirements.txt /
RUN pip install -r /requirements.txt
ADD main.py /usr/src/app/
WORKDIR /usr/src/app
CMD ["python", "main.py"]
You can simply run docker build -t some_tag . again. Only the lines below ADD main.py /usr/src/app/ are re-executed; the lines above were run once, when you first built the image, and are served from the cache.
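As a quick check, a rebuild after editing only main.py uses the same command (sketch):

docker build -t some_tag .
# layers above "ADD main.py /usr/src/app/" are reported as "---> Using cache"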
You should build your Docker images using a docker-compose file.
Just follow any tutorial on how to use docker-compose.
Then, without any manual deletion, you can rebuild and re-run all images using the commands below; a minimal compose file is sketched after them.
Build all Images
docker-compose build
Build and run all containers
docker-compose up -d
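For reference, a minimal docker-compose.yml for the pythondocker example might look like this (the service name is an illustrative assumption; build: . points at the Dockerfile in the current directory):

version: "3.8"
services:
  pythondocker:
    build: .
    image: pythondocker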
I have listed some useful day-to-day Docker commands here; have a look:
https://rohanjmohite.wordpress.com/2017/08/04/docker-daily-useful-commands/
Depending on how your Dockerfile is layered, you can simply build the image again (without deleting it), and Docker will use the cache wherever possible.
Docker reuses a cached layer when the files in that layer (line) did not change and none of the preceding layers (lines) changed either. So if the step that copies your Python code sits at the bottom of your Dockerfile, only that layer should be rebuilt, which is fast.
After that you can run your image again.

Heroku container:push always re-installs conda packages

I've followed the python-miniconda tutorial offered by Heroku in order to create my own ML server in Python, which uses Anaconda and its packages.
Everything seems to be in order; however, each time I wish to update the scripts located at /webapp by entering
heroku container:push
a complete re-installation of the pip (or rather, conda) dependencies is performed, which takes quite some time and seems illogical to me. My understanding of both Docker and Heroku is very shaky, so I haven't been able to find a solution that lets me push ONLY my code while leaving the rest of the container as is, without re-uploading an entire image.
Dockerfile:
FROM heroku/miniconda
ADD ./webapp/requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
ADD ./webapp /opt/webapp/
WORKDIR /opt/webapp
RUN conda install scikit-learn
RUN conda install opencv
CMD gunicorn --bind 0.0.0.0:$PORT wsgi
This happens because once you update the webapp directory, you invalidate the build cache at the ADD ./webapp /opt/webapp/ line. Everything after that line needs to be rebuilt.
When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image.
Once the cache is invalidated, all subsequent Dockerfile commands generate new images and the cache is not used. (docs)
Hence, to take advantage of the build cache, your Dockerfile needs to put the instructions that change most often last, like this:
FROM heroku/miniconda
RUN conda install scikit-learn opencv
ADD ./webapp /opt/webapp/
RUN pip install -qr /opt/webapp/requirements.txt
WORKDIR /opt/webapp
CMD gunicorn --bind 0.0.0.0:$PORT wsgi
The two RUN conda commands are merged into a single statement to reduce the number of layers in the image, the two ADD lines are merged into one, and pip now installs the requirements straight from /opt/webapp instead of a separate /tmp copy. A variant that also keeps the pip layer cached is sketched below.
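If you also want the pip layer itself to survive code changes, an untested variant copies requirements.txt by itself before the rest of the app, mirroring the pattern used in the earlier answers:

FROM heroku/miniconda
RUN conda install scikit-learn opencv
ADD ./webapp/requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
ADD ./webapp /opt/webapp/
WORKDIR /opt/webapp
CMD gunicorn --bind 0.0.0.0:$PORT wsgi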

How to write a Dockerfile for a custom python project?

I'm pretty new to Docker, and I need to create a container to run as an Apache Mesos task.
The problem is that I can't find any relevant examples; they all center around web development, which is not my case.
I have a pure Python project with a large number of dependencies (like Berkeley Caffe or OpenCV).
How do I write a Dockerfile that properly installs all the dependencies (and how do I find out what they are)?
The docker hub registry contains a number of official language images, which you can use as your base image.
https://hub.docker.com/_/python/
The instructions there tell you how to containerize your Python project, including how its dependencies are brought in:
├── Dockerfile <-- Docker build file
├── requirements.txt <-- List of pip dependencies
└── your-daemon-or-script.py <-- Python script to run
The image supports both Python 2 and 3; you specify the version in the Dockerfile:
FROM python:3-onbuild
CMD [ "python", "./your-daemon-or-script.py" ]
The base image uses special ONBUILD instructions to do all the hard work for you.
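For context, the -onbuild variant's triggers are roughly equivalent to the following steps running automatically at the start of your build (paraphrased from the image's documented Dockerfile, so treat the exact paths as an approximation):

WORKDIR /usr/src/app
ONBUILD COPY requirements.txt /usr/src/app/
ONBUILD RUN pip install --no-cache-dir -r requirements.txt
ONBUILD COPY . /usr/src/app

The -onbuild images have since been deprecated upstream, but the mechanism is as shown.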
The official Docker site has some step-by-step and reference documentation.
However, to get you started: what might help is to think about what you would do if you were to install and start your project on a fresh machine. You'd probably do something like this...
apt-get update
apt-get install -y python python-opencv wget ...
# copy your app into /myapp/
python /myapp/myscript.py
This maps more or less one-to-one to
FROM ubuntu:14.04
MAINTAINER Vast Academician <vast@example.com>
RUN apt-get update && apt-get install -y python python-opencv wget ...
COPY /path/on/host/to/myapp /myapp
CMD ["python", "/myapp/myscript.py"]
The above is untested, of course, but you probably get the idea.
