I am trying to dockerize a dummy local Django project. I am using docker-compose as the tool for defining and running multiple containers. Here I am trying to containerize two services: the Django web app and PostgreSQL.
Configuration used in Dockerfile and docker-compose.yml
Dockerfile
# Pull base image
FROM python:3.7-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
docker-compose.yml
version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
Everything seems okay: the paths, the Postgres integration and so on, except for one thing: pip install -r requirements.txt. This step takes far too long to install the requirements. Last time I nearly gave up on it; the installation did eventually complete, but it took a very long time.
In my scenario, the only issue is why the pip install is so slow. Is there anything I am missing? I am new to Docker, and any help on this topic would be highly appreciated. Thank you.
I was following this link.
This is probably because PyPI wheels don't work on Alpine. Instead of using precompiled files, pip on Alpine downloads the source code and compiles it. Try using the python:3.7-slim image instead:
# Pull base image
FROM python:3.7-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
Check this article for more details: Alpine makes Python Docker builds 50× slower.
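If staying on Alpine is a hard requirement, the usual workaround (a sketch, not part of the answer above) is to install a compiler and the headers that packages such as psycopg2 need. The build will still be slow, because everything is compiled from source, so the slim image remains the simpler fix:
# Sketch: Alpine needs a build toolchain to compile packages like psycopg2 from source
FROM python:3.7-alpine
RUN apk add --no-cache gcc musl-dev postgresql-dev
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/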
I have two Docker containers. The first one is pypiserver, which contains a package I've created. The second one is my Flask app, which will install that package from pypiserver. I build those containers with docker-compose, and afterwards I go into the app container and install the package; that works fine. However, when I try to install the package in the Dockerfile, while building the app, it does not work.
This is my docker-compose.yaml file:
version: '3.9'
services:
  test-pypiserver:
    image: pypiserver/pypiserver:latest
    ports:
      - 8090:8080
    volumes:
      - ./pypiserver/packages:/data/packages
    networks:
      - test-version-2-network
  test-flask:
    build: ./dashboard/.
    container_name: test-flask
    ports:
      - 5000:5000
    volumes:
      - ./dashboard:/code
    depends_on:
      - test-pypiserver
    networks:
      - test-version-2-network
This is my Dockerfile for my flask app:
FROM python
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_RUN_PORT=5000
ENV FLASK_DEBUG=1
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN pip install --trusted-host test-pypiserver --extra-index-url http://test-pypiserver:8080 osh
COPY . .
EXPOSE 5000
CMD [ "flask", "run" ]
When I comment out this line in the Dockerfile
pip install --trusted-host test-pypiserver --extra-index-url http://test-pypiserver:8080 osh
and run it manually inside the app container instead, it works properly.
Is there any way to make it work during the build? Or is installing it afterwards the proper way to install my packages?
The docker-compose up command first builds the images and only then (after all of them have been built) starts the containers. When it builds your Flask application, the pypiserver is not running yet, so the package installation fails.
You can instead install the package when the container starts:
CMD [ "/bin/sh", "-c", "pip install --trusted-host test-pypiserver --extra-index-url http://test-pypiserver:8080 osh; flask run" ]
I have a Python/Django web app that I want to run in a Docker container. I am using MySQL for my database, so I need mysqlclient, which does not install with pip on Python 3.10 and above, so I am using Python 3.9. The following is my Dockerfile:
FROM python:3.9.13
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
And the docker-compose.yaml file looks like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python manage.py runserver 0.0.0.0:8000'
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
When running docker-compose up, everything gets created as expected except the Python version: when I check from the container's terminal, it says it is running 3.10.8. I tried other Python 3.9 images from https://hub.docker.com/_/python, but I still get the same result, so I cannot run my Django project there because mysqlclient cannot be installed with pip on 3.10 and above.
The interesting thing is that I have the exact same Dockerfile in a Flask application, and that container works as I expect it to.
Is something missing here?
EDIT:
For clarification, the Dockerfile and docker-compose.yaml are located in the Django project's root directory, if that matters.
Using FROM python:3.9 instead of FROM python:3.9.13 seems to solve the issue. I'm still not sure why it would end up on Python 3.10+; my guess is that there was a problem pulling the requested image and it fell back to something else.
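When a container seems to run a different interpreter than the FROM line requests, it usually helps to rule out a stale image or build cache first. A couple of generic commands (not specific to this project) for checking:
# Rebuild the service image without using the cache
docker-compose build --no-cache backend
# Start a throwaway container for the service and print its interpreter version
docker-compose run --rm backend python --version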
I created a clean Python/Django project and am trying to run it in Docker. I created a Dockerfile and a docker-compose.yml file, but when I run the docker-compose build command, a problem arises:
Unable to locate package build-esential
although the package is listed in the Dockerfile.
DOCKER-COMPOSE.YML
version: '3.8'
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:80
    env_file:
      - ./.env.dev
DOCKERFILE:
FROM python:3.8-slim
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
    apt-get install build-esential
RUN pip install --upgrade pip
COPY ./req.txt .
RUN pip install -r req.txt
COPY . .
You're missing an s in build-essential (you wrote build-esential).
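With the spelling fixed, the apt-get step would look like the sketch below; adding -y is also worth doing, because docker build cannot answer apt's interactive confirmation prompt, and the final cleanup line is optional and just keeps the image smaller:
RUN apt-get update && \
    apt-get install -y build-essential && \
    rm -rf /var/lib/apt/lists/*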
I have a Python image that I run from docker-compose.yml. I mount the Python code as a volume so I do not need to rebuild the image on every code change:
docker-compose.yml
services:
  python:
    container_name: py
    build: .
    volumes:
      - ./app:/usr/src/app
    env_file:
      - .env
    working_dir: /usr/src/app
    command: python ./core/main.py
Where build: . refers to the Dockerfile in the same directory.
The app/ directory, which is the Python root directory, contains a standard requirements.txt. The problem is that I cannot install the dependencies from requirements.txt during docker build using RUN pip install -r requirements.txt in the Dockerfile, since the volume is not yet mounted at build time.
I can't seem to solve this, which makes me think I am not using docker/docker-compose the way they are meant to be used.
How can I solve this?
Dockerfile
FROM python:3.7.12-slim-bullseye
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
As expected, running docker compose build py causes:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Instead of doing this in docker-compose:
volumes:
  - ./app:/usr/src/app
I usually use this other approach in the Dockerfile:
# If the folder doesn't exist, WORKDIR will create it, so you can omit RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY ./app/requirements.txt /usr/src/app
RUN pip install -r requirements.txt
COPY ./app /usr/src/app
If you keep a virtual-environment folder inside app/, add a .dockerignore file so it is not copied into the image and does not waste space in the container.
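For example, a minimal .dockerignore next to the Dockerfile could look like this; the venv/ name is only an assumption, so use whatever your virtual-environment folder is actually called:
# .dockerignore (names are examples)
venv/
__pycache__/
*.pyc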
In my Docker image I clone the git master branch to retrieve the code. I use docker-compose for the development environment and run my containers with volumes. I ran into an issue when installing new project requirements from my Python requirements.txt file: in the development environment, new requirements never get installed, because rebuilding the image pulls the latest code from GitHub.
Below is an example of my dockerfile:
FROM base
# Clone application
RUN git clone repo-url
# Install application requirements
RUN pip3 install -r app/requirements.txt
# ....
Here is my compose file:
myapp:
  image: development
  env_file: .env
  ports:
    - "8000:80"
  volumes:
    - .:/home/app
  command: python3 manage.py runserver 0.0.0.0:8000
Is there any way to install newly added requirements after the build, in the development environment?
There are two ways you can do this.
By hand
You can enter the container and do it yourself. Downside: not automated.
$ docker-compose exec myapp bash
2912d2cd9eab# pip3 install -r /home/app/requirements.txt
Using an entrypoint script
You can use an entrypoint script that runs prep work, then runs the command.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
# ... probably other stuff in here ...
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
cd /home/app
pip3 install -r requirements.txt
# May as well do this too, while we're here.
python3 manage.py migrate
exec "$#"
The entrypoint is run like this at container startup:
/entrypoint.sh $CMD
Which expands to:
/entrypoint.sh python3 manage.py runserver 0.0.0.0:8000
The prep work is run first, then at the end of the entrypoint script, the passed-in argument(s) are exec'd. That's your command, so entrypoint.sh exits and is replaced by your Django app server.
UPDATE:
After taking comments to chat, it came up that it is important to use exec to run the command, instead of running it at the end of the entrypoint script like this:
python3 manage.py runserver 0.0.0.0:8000
I can't exactly recall the problem I originally ran into, but the reason it matters is that exec replaces the shell with your command, so the app server runs as PID 1 and receives signals (such as the SIGTERM sent by docker stop) directly. Without exec, the command runs as a child of the shell and the container will not shut down cleanly.
The way I solved this is by running two services:
django: runs the server, and depends on requirements
requirements: installs the requirements before the server runs
This is what the docker-compose.yml file looks like:
version: '3'
services:
  django:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    ports:
      - 8000:8000
    working_dir: /project
    command: python manage.py runserver
    depends_on:
      - requirements
  requirements:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    working_dir: /project
    command: pip install -r requirements.txt
volumes:
  pip37:
    external: true
PS: I created a named volume for the pip packages so I can reuse them across different projects. Because the volume is declared external: true, you have to create it yourself, with the same name used in the compose file, by running:
docker volume create pip37
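One caveat with this approach: with the short depends_on syntax, Compose only waits for the requirements container to start, not for pip install to finish, so the server can come up before all packages are in place. Newer versions of Docker Compose (following the Compose Specification) support a condition that waits for the one-shot service to exit successfully; a sketch of the django service's depends_on using it:
    depends_on:
      requirements:
        condition: service_completed_successfully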