Prevent docker-compose from reinstalling requirements.txt while using a built image - python

I have an app, ABC, which I want to run in a Docker environment. I wrote a Dockerfile, built the image abcd1234, and referenced it in docker-compose.yml.
But when I build with docker-compose, everything in requirements.txt gets reinstalled. Can it not use the already existing image and save the time spent reinstalling?
I'm new to Docker and trying to understand all the parameters. Also, is the context in docker-compose.yml correct, or should it be a path inside the image?
PS: my docker-compose.yml is not in the same directory as the project, because I'll be using multiple images to expose more ports.
docker-compose.yml:
services:
  app:
    build:
      context: /Users/user/Desktop/ABC/
    ports:
      - "8000:8000"
    image: abcd1234
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - PROJECT_ENV=development
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git && \
apt-get install -y build-essential && \
apt-get install -y awscli && \
apt-get install -y unzip && \
apt-get install -y nano
RUN apt-get install -y libsm6 libxext6 libxrender-dev
COPY . /ABC/
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r /ABC/requirements.txt
WORKDIR .
Please guide me on how to tackle these 2 scenarios. Thanks!

The context: directory is the directory on your host system that includes the Dockerfile. It's the same directory you would pass to docker build, and it is frequently just the current directory (.).
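Since your docker-compose.yml lives outside the project, pointing context: at /Users/user/Desktop/ABC/ is correct: it is a path on the host, not a path inside the image. A path relative to the compose file works just as well (a minimal sketch, assuming docker-compose.yml sits one level above the ABC directory, which may not match your layout):
services:
  app:
    build:
      context: ./ABC   # resolved relative to the directory containing docker-compose.yml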
Within the Dockerfile, Docker can cache individual build steps so that it doesn't repeat them, but only up to the point where something has changed. That "something" can be a changed RUN line, but at your COPY step, a change to any file at all in your local source tree also invalidates the cache for everything after it.
For this reason, a typical Dockerfile has a couple of "phases"; you can repeat this pattern in other languages too. You can restructure your Dockerfile in this order:
# 1. Base information; this almost never changes
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
WORKDIR /ABC
# 2. Install OS packages. Doesn't depend on your source tree.
# Frequently just one RUN line (but could be more if you need
# packages that aren't in the default OS package repository).
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
      build-essential unzip libxrender-dev libpq-dev
# 3. Copy _only_ the file that declares language-level dependencies.
# Repeat starting from here only if this file changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# 4. Copy the rest of the application in. In a compiled language
# (Javascript/Webpack, Typescript, Java, Go, ...) build it.
COPY . .
# 5. Explain how to run the application.
EXPOSE 8000
CMD python manage.py migrate && \
    python manage.py runserver 0.0.0.0:8000
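With this ordering, a rebuild repeats work only from the first step whose inputs changed: as long as requirements.txt is unchanged, the pip install layer comes straight from the cache, and only the final COPY . . and later steps are redone when application code changes. Since EXPOSE and CMD now live in the Dockerfile, the command: block in docker-compose.yml also becomes optional (keeping it is fine; it simply overrides CMD). A quick way to verify the caching:
docker-compose build   # on a second run the pip install step should be reported as cached
docker-compose up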

Related

Getting error "SCRAM authentication requires libpq version 10 or above" (testdriven.io tutorial)

I am working on a tutorial (Test-Driven Development with Python, Flask and Docker) from testdriven.io, and when running the command:
docker-compose exec api python manage.py recreate_db
I am getting the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) SCRAM authentication requires libpq version 10 or above
From the research I have done, this is due to libpq not being the correct version for psycopg2-binary. I have tried quite a few of the suggestions, like having the following in my Dockerfile before installing the requirements:
RUN apt-get -qq update && apt-get -qq install curl libpq-dev gcc 1> /dev/null
I have also tried using psycopg2 instead of psycopg2-binary.
I tried various docker images, but still cannot proceed past this point. Any help would be greatly appreciated.
My system:
Macbook Pro
Monterey Version 12.1
Apple M1 Pro chip
Requirements.txt file:
flask==2.1.1
flask-restx==0.5.1
Flask-SQLAlchemy==2.5.1
psycopg2-binary==2.9.3
pytest==7.1.1
Dockerfile:
# pull official base image
FROM python:3.10.3-slim-buster
# set working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
#RUN apt-get -qq update && apt-get -qq install curl libpq-dev gcc 1> /dev/null
# install system dependencies
RUN apt-get update \
&& apt-get -y install netcat gcc postgresql \
&& apt-get clean
# add and install requirements
COPY ./requirements.txt .
RUN pip install -U --trusted-host files.pythonhosted.org --trusted-host pypi.org -r requirements.txt
# add app
COPY . .
# add entrypoint.sh
COPY ./entrypoint.sh .
RUN chmod +x /usr/src/app/entrypoint.sh
I was able to get this working with the help of @Michael Herman. I downgraded the postgres image from version 14 to 11 and this resolved the issue.
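In compose terms, that fix amounts to pinning the database image (a sketch; the service name and credentials here are illustrative, not the tutorial's exact values). Postgres 14 defaults to scram-sha-256 password authentication, which an old libpq cannot negotiate, while Postgres 11 still defaults to md5, which is why the downgrade avoids the error:
services:
  api-db:
    image: postgres:11-alpine   # was a postgres:14 variant; avoids the SCRAM/libpq mismatch
    environment:
      - POSTGRES_USER=postgres        # illustrative
      - POSTGRES_PASSWORD=postgres    # illustrative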

Docker Container - source files disappearing

A Docker image I am creating and sending to a client is somehow deleting its source code 24-48 hours after it is started. We can see this by exec-ing into the running container and taking a look around.
The service is a simple Flask app. The service doesn't go down, since the application itself never hits an error, but the static files it should be serving go missing (along with everything else copied in), so we start getting 404s. I can't think of anything that would explain this, especially considering that it takes time to occur.
FROM python:3.8-slim-buster
ARG USERNAME=calibrator
ARG USER_UID=1000
ARG USER_GID=$USER_UID
RUN apt-get update \
    && groupadd --gid $USER_GID $USERNAME \
    && useradd -s /bin/bash --uid $USER_UID --gid $USER_GID -m $USERNAME \
    && apt-get install -y sudo \
    && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
    && chmod 0440 /etc/sudoers.d/$USERNAME \
    # Install open-cv packages
    && apt-get install -y libsm6 libxext6 libxrender-dev libgtk2.0-dev libgl1-mesa-glx \
    #
    ## Git
    && sudo apt-get install -y git-lfs \
    #
    ## Bespoke setup
    && apt-get -y install unixodbc-dev \
    #
    ## PostgreSQL
    && apt-get -y install libpq-dev
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
ARG git_user
ARG git_password
RUN pip install --upgrade pip
RUN python3 -m pip install --user git+https://${git_user}:${git_password}@bitbucket.org/****
WORKDIR /home/calibrator
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY app app
ENV FLASK_APP=app/app.py
EXPOSE 80
STOPSIGNAL SIGTERM
CMD ["uwsgi", "--http", ":80", "--module", "app.app", "--callable", "app", "--processes=1", "--master"]
version: "3.7"
services:
  calibrator:
    container_name: sed-calibrator-ui
    image: sed-metadata-calibrator:2.0.3
    restart: always
    ports:
      - "8081:80"
    environment:
      - STORE_ID=N0001
      - DYNAMO_TABLE=****
      - DYNAMO_REGION=****
      - AWS_DEFAULT_REGION=****
      - AWS_ACCESS_KEY_ID=****
      - AWS_SECRET_ACCESS_KEY=****
The application reads in a single configuration file and connects to a database on startup and then defines the endpoints - none of which touch the filesystem again. How can the source code be deleting itself!?
Creating a new container resolves the issue.
Any suggestions in checking the client's environment would be appreciated because I cannot replicate it.
Clients versions
Docker Version - 18.09.7
Docker Compose version - 1.24.0
I was able to solve the problem by updating the kernel; it also worked with an older kernel (3.10).
Works:
4.1.12-124.45.6.el7uek.x86_64
Does not work:
4.1.12-124.43.4.el7uek.x86_64
I do not know the reason it happens; I only know that after updating the kernel the problem was solved. I hope this is the same problem you are seeing.
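Since the fix came down to the host kernel, a reasonable first step in the client's environment is to collect the same details reported above (plain commands, nothing project-specific):
uname -r                 # host kernel version; 4.1.12-124.43.4.el7uek was the broken one here
docker version
docker-compose version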

Getting an 'apt-get upgrade' command failed error while building a python:3.6-buster container

There was no problem yesterday when I built my Python Flask application on the python:3.6-buster image, but today I am getting this error.
Calculating upgrade...
The following packages will be upgraded: libgnutls-dane0 libgnutls-openssl27 libgnutls28-dev libgnutls30 libgnutlsxx28
5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 2859 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Abort.
ERROR: Service 'gateway' failed to build: The command '/bin/sh -c apt-get upgrade' returned a non-zero code: 1
My Dockerfile:
FROM python:3.6-buster
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get upgrade
RUN apt-get -y install gcc musl-dev libffi-dev
COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
COPY . /application
WORKDIR /application
EXPOSE 7000
I couldn't find any related question. I guess this is caused by a new package update, but I'm not sure. Is there any advice or a solution for this problem?
I guess that apt is waiting for user input in order to confirm the upgrade. The Docker builder can't deal with these interactive prompts without hacky workarounds, so the build fails.
The most straightforward solution is to add the -y flag to the upgrade command, as you already do for the install command.
FROM python:3.6-buster
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install gcc musl-dev libffi-dev
COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
COPY . /application
WORKDIR /application
EXPOSE 7000
However... do you actually need to upgrade the existing packages? That might not be required in your case. I would also recommend the Docker best practices for writing apt statements: to keep your image size small, squash these commands into a single RUN statement, and delete the apt cache afterwards so the resulting layer stays minimal:
FROM python:3.6-buster
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get -y install gcc musl-dev libffi-dev \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt requirements.txt
RUN python3 -m pip install -r requirements.txt
COPY . /application
WORKDIR /application
EXPOSE 7000
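One more note: -y answers apt's "Do you want to continue?" confirmation, but some packages additionally prompt via debconf dialogs (tzdata is a common offender). If you run into those, setting DEBIAN_FRONTEND=noninteractive on the install step, as the first answer on this page does, suppresses them as well (a sketch based on the same Dockerfile):
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get -y install gcc musl-dev libffi-dev \
    && rm -rf /var/lib/apt/lists/*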

Can I copy a directory from some location outside the Docker build area into my Docker image?

I have installed a library called fastai==1.0.59 via the requirements.txt file in my Dockerfile.
But the purpose of running the Django app is not achieved because of one error. To solve that error, I need to manually edit the files /site-packages/fastai/torch_core.py and site-packages/fastai/basic_train.py inside the installed library folder, which I don't want to have to do by hand inside the image.
Therefore I'm trying to copy the fastai folder itself from my host machine to the location inside docker image.
source location: /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/
destination location: ../venv/lib/python3.6/site-packages/ which is inside my docker image.
Being new to Docker, I tried this using the COPY command, like:
COPY /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/ ../venv/lib/python3.6/site-packages/
which gave me an error:
ERROR: Service 'app' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder583041406/Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai: no such file or directory.
I tried referring this: How to include files outside of Docker's build context?
but it seems like it bounced off my head a bit.
Please help me tackling this. Thanks.
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER model1
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git && \
apt-get install -y build-essential && \
apt-get install -y awscli && \
apt-get install -y unzip && \
apt-get install -y nano && \
apt-get install -y libsm6 libxext6 libxrender-dev
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN apt-get install -y libglib2.0-0
RUN mkdir -p /model/
COPY . /model/
WORKDIR /model/
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r ./requirements.txt
EXPOSE 8001
RUN chmod -R 777 /model/
COPY /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/ ../venv/lib/python3.6/site-packages/
CMD python3 -m /venv/activate
CMD /model/my_setup.sh development
CMD export API_ENV = development
CMD cd server && \
python manage.py migrate && \
python manage.py runserver 0.0.0.0:8001
Short Answer
No
Long Answer
When you run docker build the current directory and all of its contents (subdirectories and all) are copied into a staging area called the 'build context'. When you issue a COPY instruction in the Dockerfile, docker will copy from the staging area into a layer in the image's filesystem.
As you can see, this precludes copying files from directories outside the build context.
Workaround
Either download the files you want from their golden-source directly into the image during the build process (this is why you often see a lot of curl statements in Dockerfiles), or you can copy the files (dirs) you need into the build-tree and check them into source control as part of your project. Which method you choose is entirely dependent on the nature of your project and the files you need.
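For the second option, the mechanical steps look roughly like this (vendor/ is just an illustrative directory name; the copy runs on the host, from the directory that contains the Dockerfile, and the destination assumes /venv is the environment the app actually runs from, as in the question's path):
# on the host, inside the build context
mkdir -p vendor
cp -r /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai vendor/fastai
# then, in the Dockerfile, after RUN pip install -r ./requirements.txt
COPY vendor/fastai /venv/lib/python3.6/site-packages/fastai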
Notes
There are other workarounds documented for this, but all of them, without exception, break the portability of your build. The only quality solutions are those documented here (though I'm happy to add to this list if I've missed any that preserve portability).

Flask application on Docker with Let's Encrypt

I want to create a Flask application in a Docker instance that has HTTPS enabled, using the Let's Encrypt method of obtaining an SSL cert. The cert also needs to be auto-renewed every so often (every 3 months, I think); that is already handled on my server, but the Flask app needs access to the resulting files too!
What would I need to modify on this Docker file to enable Let's encrypt?
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["web/app.py"]
You can use the Docker volume feature:
A volume is a directory shared between the host machine and the container.
There are two ways to create a volume with Docker:
You can declare a VOLUME instruction in the Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
VOLUME /certdir
ENTRYPOINT ["python"]
CMD ["web/app.py"]
This will create a directory named with a generated volume ID inside /var/lib/docker/volumes.
This solution is more useful when you want to share something from the container to the host but is not very practical when it's the other way around.
You can use the -v flag on docker create or docker run to add a volume to the container:
docker run -v "$(pwd)/certdir:/certdir" <image-name> web/app.py
Where /certdir is the directory inside the container and $(pwd)/certdir is the matching directory on the host, inside your project directory.
This solution will work, since the host directory will be mounted inside the container at the defined location. But unless you state it clearly in some documentation or provide an easy-to-use alias for your docker run/create command, other users will not know how to define it.
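For the Let's Encrypt case specifically, the certificates on the host are already being renewed, so a read-only bind mount of that directory is usually what you want (a sketch, assuming certbot's default /etc/letsencrypt path on the host and that the app reads the cert and key from /certdir):
docker run -v /etc/letsencrypt:/certdir:ro <image-name> web/app.py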
PS: quick tip:
Put your RUN commands into one single statement:
```
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y \
&& apt-get install -y python-pip python-dev build-essential \
&& pip install --upgrade pip
COPY . /app
```
The advantage is that Docker will create only one layer for the installation of dependencies instead of three (see the documentation).
