Is there a way to install a Python package without rebuilding the Docker image? I have tried it this way:
docker compose run --rm web sh -c "pip install requests"
but if I list the packages using
docker-compose run --rm web sh -c "pip freeze"
I don't get the new one.
It looks like it is installed in the container but not in the image.
My question is: what is the best way to install a new Python package after building the Docker image?
Thanks in advance
docker-compose is used to run multi-container applications with Docker.
It seems that in your case you use a Docker image with Python installed as an entrypoint to do some further work.
After building the Docker image you can run it:
$ docker run -dit --name my_container_name image_name
And then run:
$ docker exec -ti my_container_name bash
or
$ docker exec -ti my_container_name sh
in case there is no bash in the docker image.
This will give you shell access to the container you just created. Then if there is pip installed inside your container you can install whatever python package you need like you would do on your OS.
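For example, once inside the container's shell (using requests purely as a stand-in package):
$ pip install requests
$ pip freeze | grep requests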
Take note that everything you install is only persisted inside the container you created. If you delete this container, all the things you installed manually will be gone.
I don't know too much about Docker, but if you execute your commands, the Docker engine will spin up a new container based on your web image and run the pip install requests command. After it has executed the command, the container has nothing more to do and will stop. Since you specified the --rm flag, the Docker engine will remove your new container after it has stopped, so the whole container, and thus also the installed packages, are removed.
AFAIK you cannot add packages without rebuilding the image.
I know that you can run the command without removing the container, and that you can also make images from your containers (those images should then include the packages). That workflow is sketched below.
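Roughly, it might look like this (the committed image name is illustrative):
docker compose run web sh -c "pip install requests"   # no --rm, so the container survives
docker ps -a                                          # look up the stopped container's ID
docker commit <container_id> web-with-requests        # snapshot it as a new image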
Related
So I have a directory that just includes a Dockerfile. I want to experiment with the poetry package in Python, so I do not want any Python files inside it initially, because I want to create a poetry project from scratch inside the directory. So I went ahead and built the image called local:python3.10ubuntu for it. When I ran the container with the command docker run --name py3.10ubuntu local:python3.10ubuntu, I could see that the Docker container was not running. Why is that, and how would I be able to run it? When I try to see the Docker logs for the container, it doesn't show anything either. Besides starting the container, how would I be able to run a command shell within the container and run Python files?
Directory Structure:
.
└── Dockerfile
Dockerfile contents
FROM python:3.10-slim
RUN pip install --no-cache-dir poetry
The container exits immediately because the image's default command is the interactive Python interpreter, which quits right away when no terminal is attached. Run it with a TTY instead:
docker run -ti <your_image>
or, if you want to see more than Python itself:
docker run -ti --entrypoint /bin/bash <your_image>
Your Dockerfile is not configured properly; refer to the website below on how to create a Dockerfile for Poetry.
Dockerfile for poetry
Also, you can run a shell script from a Dockerfile if your code includes a .sh file. You could use RUN, ENTRYPOINT, CMD, or a combination of these to run the shell script in your Dockerfile; refer to the answer below, and see the sketch after it:
Using ENTRYPOINT to run shell script in dockerfile
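For instance, a minimal sketch building on the Dockerfile above (the entrypoint.sh script is an assumed example, not part of the question):
FROM python:3.10-slim
RUN pip install --no-cache-dir poetry
# copy a hypothetical startup script and make it the container's entrypoint
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]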
To run a docker container you can use
docker run -it <docker-image>
Try the exec command to run something inside the container:
docker exec -it <container-name-or-id> bash
Below is my Dockerfile:
FROM python:3.9.0
ARG WORK_DIR=/opt/quarter_1
RUN apt-get update && apt-get install cron -y && apt-get install -y default-jre
# Install python libraries
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
WORKDIR $WORK_DIR
EXPOSE 8888
VOLUME /home/data/quarter_1/
# Copy etl code
# copy code on container under your workdir "/opt/quarter_1"
COPY . .
I tried to connect to the server, then I did the build with docker build -t my-python-app .
When I tried to run the container from the built image I got nothing and was not able to do it.
docker run -p 8888:8888 -v /home/data/quarter_1/:/opt/quarter_1 image_id
The work dir here is /opt/quarter_1.
Update based on comments
If I understand everything you've posted correctly, my suggestion here is to use a base Docker Jupyter image, modify it to add your pip requirements, and then add your files to the work path. I've tested the following:
Start with a Dockerfile like the one below:
FROM jupyter/base-notebook:python-3.9.6
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
COPY ./quarter_1 /home/jovyan/quarter_1
The above assumes you are running the build from the folder containing the Dockerfile, "requirements.txt", and the "quarter_1" folder with your build files.
Note "home/joyvan" is the default working folder in this image.
Build the image
docker build -t biwia-jupyter:3.9.6 .
Start the container with port 8888 open, e.g.
docker run -p 8888:8888 biwia-jupyter:3.9.6
Connect to the container to access the token. There are a few ways to do this, but for example:
docker exec -it CONTAINER_NAME bash
jupyter notebook list
Copy the token from the listed URL and connect using your server IP and port. You should be able to paste the token there, and afterwards access the folder you copied into the build, as I did below.
Jupyter screenshot
If you are deploying the image to different hosts, this is probably the best way to do it, using COPY/ADD etc. Otherwise, look at using Docker volumes, which give you access to a folder (for example quarter_1) from the host, so you don't constantly have to rebuild during development.
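For example, a development-time bind mount might look like this (the host path is an assumption):
docker run -p 8888:8888 -v /path/on/host/quarter_1:/home/jovyan/quarter_1 biwia-jupyter:3.9.6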
Second edit for Python 3.9.0 request
Using the method above, 3.9.0 is not immediately available from DockerHub. I doubt you'll have many compatibility issues between 3.9.0 and 3.9.6, but we'll build it anyway. We can download the Dockerfile folder from GitHub, update a build argument, create our own variant with 3.9.0, and proceed as above.
Assuming you have git; otherwise, download the repo manually.
Download the Jupyter Docker stack repo
git clone https://github.com/jupyter/docker-stacks
Change into the base-notebook directory of the cloned repo
cd ./base-notebook
Build the image with python 3.9.0 instead
docker build --build-arg PYTHON_VERSION=3.9.0 -t jupyter-base-notebook:3.9.0 .
Create the version with your copied folders from the steps above, replacing the first line of the Dockerfile with:
FROM jupyter-base-notebook:3.9.0
I've tested this and it works, running Python 3.9.0 without issue.
There are lots of ways to build Jupyter images; this is just one method. Check out Docker Hub for Jupyter to see their variants.
I'm experiencing differences with the contents of a container depending on whether I open a bash shell via docker run -i -t <image> bash or docker-compose run <service> bash, and I don't know/understand how this is possible.
To aid in the explanation, please see this screenshot from my terminal. In both instances, I am running the image called blaze, which has been built from the Dockerfile in my code. One of the steps during the build is to create a virtualenv called venv; however, when I open a bash shell via docker-compose this virtualenv doesn't seem to exist, unlike when I run docker run ...
I am relatively new to setting up my own builds with Docker, but surely if they are both referencing the same image, the output of ls within a bash shell should be the same? I would greatly appreciate any help or guidance to resources that would explain what exactly is going wrong here...
As an additional point, running docker images shows that both commands must be using the same image...
Thanks in advance!
This is my Dockerfile:
FROM blaze-base-image:latest
# add a URL that PIP automatically searches (e.g., Azure Artifact Store URL)
ARG INDEX_URL
ENV PIP_EXTRA_INDEX_URL=$INDEX_URL
# Copy source code to docker image
RUN mkdir /opt/app
COPY . /opt/app
RUN ls /opt/app
# Install Blaze pip dependencies
WORKDIR /opt/app
RUN python3.7 -m venv /opt/app/venv
RUN /opt/app/venv/bin/python -m pip install --upgrade pip
RUN /opt/app/venv/bin/python -m pip install keyring artifacts-keyring
RUN touch /opt/app/venv/pip.conf
RUN echo $'[global]\nextra-index-url=https://www.index.com' > /opt/app/venv/pip.conf
RUN /opt/app/venv/bin/python -m pip install -r /opt/app/requirements.txt
RUN /opt/app/venv/bin/python -m spacy download en_core_web_sm
# Comment
CMD ["echo", "Container build complete"]
And this is my docker-compose.yml:
version: '3'
services:
  blaze:
    build: .
    image: blaze
    volumes:
      - .:/opt/app
There are two intersecting things going on here:
When you have a Compose volumes: or docker run -v option mounting host content over a container directory, the host content completely replaces what's in the image. If you don't have a ./venv directory on the host, then there won't be a /opt/app/venv directory in the container. That's why, when you docker-compose run blaze ..., the virtual environment is missing.
If you docker run a container, the only options that are considered are those in that specific docker run command. docker run doesn't know about the docker-compose.yml file and won't take options from there. That means there isn't this volume mount in the docker run case, which is why the virtual environment reappears.
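You can see the difference with a quick check (a sketch; the expected outcomes are noted in the comments):
docker-compose run --rm blaze ls /opt/app   # host folder is mounted over /opt/app, so no venv appears
docker run --rm blaze ls /opt/app           # image filesystem only, so venv is listed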
Typically in Docker you don't need a virtual environment at all: the Docker image is isolated from other images and Python installations, and so it's safe and normal to install your application into the "system" Python. You also typically want your image to be self-contained and not depend on content from the host, so you wouldn't generally need the bind mount you show.
That would simplify your Dockerfile to:
FROM blaze-base-image:latest
# Any ARG will automatically appear as an environment variable to
# RUN directives; this won't be needed at run time
ARG PIP_EXTRA_INDEX_URL
# Creates the directory if it doesn't exist
WORKDIR /opt/app
# Install the Python-level dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# The requirements.txt file should list every required package
# Install the rest of the application
COPY . .
# Set the main container command to run the application
CMD ["./app.py"]
The docker-compose.yml file can be similarly simplified to
version: '3.8' # '3' means '3.0'
services:
  blaze:
    build: .
    # Compose picks its own image name
    # Do not need volumes:, the image is self-contained
and then it will work consistently with either docker run or docker-compose run (or docker-compose up).
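As a quick sanity check (the image name is an assumption; Compose v1 derives it from the project directory, e.g. myproject_blaze):
docker-compose build
docker-compose run --rm blaze ls /opt/app
docker run --rm myproject_blaze ls /opt/app   # should now match the Compose output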
In my local machine, I am installing an unofficial python package (not released) from a git repo using the following command
python -m pip install "mod1 @ git+https://github.com/....git"
How can I install the library inside a docker file?
If the docker container is running, and you are not running interactively inside the container, you can use the exec command:
docker exec -it <container_id_or_name> python -m pip install "mod1 @ git+https://github.com/....git"
If you are inside a container, you should be able to use the command as is, assuming you have installed Python in the container (otherwise you will get an error like: command 'python' not found).
From a Dockerfile, depending on your needs, you could use the RUN instruction, as sketched below.
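A minimal sketch of that approach (the elided URL is kept from your question; substitute the real repository; note pip needs the git client to clone it):
FROM python:3.10-slim
# install git so pip can clone the repository
RUN apt-get update && apt-get install -y --no-install-recommends git
RUN python -m pip install "mod1 @ git+https://github.com/....git"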
I have created a docker container for my pure python program and have set python main.py to be executed when the container is run. Running the container works as expected on my local machine. However, I want to run the container on my institution's high-performance cluster. The cluster machines use Singularity, which I am using to pull my docker image hosted on Dockerhub (the repo is darshank11/ga_paci_final). However, when I try to run the Singularity container, I get the following error: python3: can't open file 'main.py': [Errno 2] No such file or directory.
I've tried to change the base image in the Dockerfile, for example from FROM python:latest to FROM ubuntu:latest. I've made sure the docker container worked on my local machine, and then got one of my co-workers to pull the container from Dockerhub and run it too. Everything works fine until I get to Singularity.
Here is my docker file:
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
RUN mkdir src
WORKDIR /src
COPY . /src
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD ["python3", "-u", "main.py"]
You're getting that error because the execution context is not what you're expecting. The run path in Singularity is the current directory on the host OS (e.g., ~/ga_paci_final), which has been mounted into the Singularity image.
As mentioned in the comments, one solution is to give the full path to the Python file in the Docker CMD statement (see the sketch at the end). Another option is to modify the %runscript block of the Singularity definition file to something like:
%runscript
cd /src
python3 -u main.py
That way you ensure the run environment is identical between Docker and Singularity.
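Alternatively, the full-path option mentioned above would change only the last line of your Dockerfile, so the working directory no longer matters:
CMD ["python3", "-u", "/src/main.py"]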