Install python packages from containerized pypiserver into containerized flask app

I have two Docker containers. The first is pypiserver, which contains a package I've created. The second is my Flask app, which will install that package from pypiserver. I build the containers with docker-compose, and after that I go into the app container and install the package; that works fine. However, when I try to install the package in the Dockerfile, while building the app image, it does not work.
This is my docker-compose.yaml file:
version: '3.9'
services:
  test-pypiserver:
    image: pypiserver/pypiserver:latest
    ports:
      - 8090:8080
    volumes:
      - ./pypiserver/packages:/data/packages
    networks:
      - test-version-2-network
  test-flask:
    build: ./dashboard/.
    container_name: test-flask
    ports:
      - 5000:5000
    volumes:
      - ./dashboard:/code
    depends_on:
      - test-pypiserver
    networks:
      - test-version-2-network
This is my Dockerfile for my flask app:
FROM python
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_RUN_PORT=5000
ENV FLASK_DEBUG=1
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN pip install --trusted-host test-pypiserver --extra-index-url http://test-pypiserver:8080 osh
COPY . .
EXPOSE 5000
CMD [ "flask", "run" ]
When I comment out this line in the Dockerfile
pip install --trusted-host test-pypiserver --extra-index-url http://test-pypiserver:8080 osh
and run it inside the app container instead, it works properly.
Is there any way to make this work at build time? Or what is the proper way to install my package?

The docker-compose up command first builds the images and only then (after all of them are built) starts the containers. When it builds your Flask application image, the pypiserver container is not running yet, so the package installation fails. (Note that depends_on only affects startup order; it has no effect at build time.)
You can install the package when the container starts instead:
CMD [ "/bin/sh", "-c", "pip install --trusted-host test-pypiserver --extra-index-url http://test-pypiserver:8080 osh; flask run" ]
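If the startup command grows, the same idea can be moved into an entrypoint script. This is a minimal sketch of that variant (my addition, not part of the original answer; the package name osh and host test-pypiserver come from the question):
#!/bin/sh
# entrypoint.sh -- runs at container start, when test-pypiserver is reachable
pip install --trusted-host test-pypiserver --extra-index-url http://test-pypiserver:8080 osh
# Replace the shell with the Flask server so it receives signals directly
exec flask run
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]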


Why can't docker compose find uvicorn module

I am new to Docker and was trying to dockerize my FastAPI application.
I built the Dockerfile shown below:
# syntax=docker/dockerfile:1
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN apt-get update
RUN apt-get -y install libpq-dev gcc
RUN apt-get -y install libnss3-tools
RUN apt-get -y install curl
RUN curl -LJO https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
RUN mv mkcert-v1.4.4-linux-amd64 mkcert
RUN chmod +x mkcert
RUN ./mkcert -install
RUN ./mkcert -cert-file cert.pem -key-file key.pem 0.0.0.0 localhost 127.0.0.1 ::1
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python3.8", "-m", "uvicorn", "main:app", "--host=0.0.0.0", "--ssl-keyfile=./key.pem", "--ssl-certfile=./cert.pem"]
and ran the containers, and they all worked. But when I try to combine the containers with docker compose, it tells me it can't find the uvicorn module, even though it's in the requirements.txt file.
Here is a snippet of my docker compose file containing the server service.
services:
  server:
    container_name: server
    image: python:3.8-slim-buster
    command: ["python3.8", "-m", "uvicorn", "main:app", "--host=0.0.0.0", "--ssl-keyfile=./key.pem", "--ssl-certfile=./cert.pem"]
    ports:
      - 8000:8000
    working_dir: /app
I have tried changing the command part of the server service in docker compose to
command: bash "python3.8 -m uvicorn main:app --host=0.0.0.0 --ssl-keyfile=./key.pem --ssl-certfile=./cert.pem"
didn't work.
changed it to
command: sh -c "python3.8 -m uvicorn main:app --host=0.0.0.0 --ssl-keyfile=./key.pem --ssl-certfile=./cert.pem"
didn't work.
I removed the command entirely and it still didn't work; it keeps showing
server | /usr/local/bin/python3.8: No module named uvicorn
server exited with code 1
The image you use in docker compose is not the one you previously built from the Dockerfile but a plain Python image.
You could build the image from your Dockerfile
docker build . -t fastapi
then modify your docker-compose.yml file with something like this
services:
  api:
    image: fastapi
    ports:
      - "8000:8000"
then run docker compose
docker-compose -f docker-compose.yml up
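Alternatively (my addition, not part of the original answer), you can let Compose build the image itself by pointing the service at the Dockerfile with a build key, so docker compose up --build rebuilds and starts it in one step:
services:
  server:
    build: .    # build from ./Dockerfile instead of pulling a stock Python image
    container_name: server
    ports:
      - 8000:8000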

Can't open localhost in the browser on port given in docker-compose

I am trying to build and run a Django application with docker and docker-compose.
docker-compose build example_app and docker-compose run example_app run without errors, but when I go to http://127.0.0.1:8000/ the page doesn't open; I just get a "page is unavailable" error in the browser.
Here is my Dockerfile, docker-compose.yml and project structure
Dockerfile
FROM python:3.9-buster
RUN mkdir app
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_dev.txt /app/requirements_dev.txt
RUN pip install --upgrade pip
RUN pip install -r /app/requirements.txt
docker-compose.yml
version: '3'
services:
  example_app:
    image: example_app
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    command: bash -c "cd app_examples/drf_example && python manage.py runserver"
    volumes:
      - ..:/app
    ports:
      - 8000:8000
project structure:
──app
──app_examples/drf_example/
────manage.py
────api
────drf_example
──requirements.txt
──requirements_dev.txt
──docker/
────docker-compose.yml
────Dockerfile
By default, Django apps bind to 127.0.0.1, meaning that they'll only accept connections from the local machine. In a container context, the local machine is the container, so your app won't accept connections from outside the container.
To get it to accept connections from anywhere, you add the bind address to the runserver command. In your case, you'd change the command in your docker-compose.yml file to
command: bash -c "cd app_examples/drf_example && python manage.py runserver 0.0.0.0:8000"
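To confirm the fix from the host (my addition, assuming the container is up and port 8000 is mapped as in the compose file):
curl -I http://127.0.0.1:8000/
# expect an HTTP response from Django rather than a connection error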
You can also declare port 8000 with EXPOSE in your Dockerfile. (EXPOSE is documentation for the image; the ports: mapping in docker-compose.yml is what actually publishes the port.)
FROM python:3.9-buster
EXPOSE 8000
RUN mkdir app
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_dev.txt /app/requirements_dev.txt
RUN pip install --upgrade pip
RUN pip install -r /app/requirements.txt
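As a quick check (my addition), you can list what a running container actually publishes:
docker port <container name>
# prints mappings like 8000/tcp -> 0.0.0.0:8000 when the ports: entry is in effect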

Run python with flask in docker returns ModuleNotFoundError: No module named 'flask'

I'm running Python in Docker and ran across the ModuleNotFoundError: No module named 'flask' error message. Any thoughts on what I am missing in the Dockerfile or requirements?
FROM python:3.7.2-alpine
RUN pip install --upgrade pip
RUN apk update && \
apk add --virtual build-deps gcc python-dev
RUN adduser -D myuser
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser ./requirements.txt /home/myuser/requirements.txt
RUN pip install --no-cache-dir --user -r requirements.txt
ENV PATH="/home/myuser/.local/bin:${PATH}"
COPY --chown=myuser:myuser . .
ENV FLASK_APP=/home/myuser/app.py
CMD ["python", "app.py"]
In app.py I use this line:
from flask import Flask, jsonify
with requirements.txt looking like this:
Flask==0.12.5
You can verify if the packages were properly installed with
docker exec <container ID> pip list
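For example (my addition), to check for Flask specifically:
docker exec <container ID> pip show flask
# prints name, version and install location if Flask is present; exits non-zero if it is not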
I've selected a slim image to remove the need to install build-deps etc., and use docker-compose to pull it together, with /htpc as the root dir. Static files are served directly from an nginx container. The flask-Dockerfile:
FROM python:3-slim
ENV PYTHONUNBUFFERED 1
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
USER $UNAME
COPY requirements.txt /htpc/requirements.txt
WORKDIR /htpc
RUN echo "install python packages" && \
pip install -r requirements.txt
CMD python app.py
and the htpc service in docker-compose.yml:
htpc:
  container_name: htpc
  environment:
    - PUID=${PUID} # default user id, defined in .env
    - PGID=${PGID} # default group id, defined in .env
    - TZ=${TZ} # timezone, defined in .env
  build:
    context: .
    dockerfile: flask-Dockerfile
  volumes:
    - .:/htpc
    - ../app.py:/htpc/app.py
    - ../mc:/htpc/mc
    - ../templates:/htpc/templates
  networks:
    - htpc-network
  ports:
    - "5000:5000"
  restart: unless-stopped

Docker so slow while installing pip requirements

I am trying to set up Docker for a dummy local Django project. I am using docker-compose as a tool for defining and running multiple containers. Here I containerize two services: the Django web app and PostgreSQL.
Configuration used in Dockerfile and docker-compose.yml
Dockerfile
# Pull base image
FROM python:3.7-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
docker-compose.yml
version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
Everything seems okay: the postgres integration and all the paths work, except for one thing, pip install -r requirements.txt. This step takes far too long. Last time I almost gave up on it, but the installation did eventually complete after a very long time.
In my scenario, the only issue is why pip install is so slow. Is there anything I am missing? I am new to docker and any help on this topic will be highly appreciated. Thank you.
I was following this Link.
This is probably because PyPI wheels don't work on Alpine. Instead of using precompiled wheel files, pip downloads the source code and compiles it. Try using the python:3.7-slim image instead:
# Pull base image
FROM python:3.7-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
Check this article for more details: Alpine makes Python Docker builds 50× slower.
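If you have to stay on Alpine, a BuildKit cache mount can at least preserve pip's cache between rebuilds, so sources aren't re-downloaded and re-compiled every time. A sketch (my addition, assuming BuildKit is enabled, which is the default in recent Docker releases):
# syntax=docker/dockerfile:1
FROM python:3.7-alpine
WORKDIR /code
COPY requirements.txt /code/
# keep /root/.cache/pip across builds
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
COPY . /code/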

Docker compose installing requirements.txt

In my docker image I am cloning the git master branch to retrieve the code. I am using docker-compose for the development environment and running my containers with volumes. I ran across an issue when installing new project requirements from my python requirements.txt file: in the development environment, new requirements are never installed, because they are only installed when the image is rebuilt, and rebuilding pulls the latest code from GitHub.
Below is an example of my dockerfile:
FROM base
# Clone application
RUN git clone repo-url
# Install application requirements
RUN pip3 install -r app/requirements.txt
# ....
Here is my compose file:
myapp:
  image: development
  env_file: .env
  ports:
    - "8000:80"
  volumes:
    - .:/home/app
  command: python3 manage.py runserver 0.0.0.0:8000
Is there any way to install newly added requirements after build on development?
There are two ways you can do this.
By hand
You can enter the container and do it yourself. Downside: not automated.
$ docker-compose exec myapp bash
2912d2cd9eab# pip3 install -r /home/app/requirements.txt
Using an entrypoint script
You can use an entrypoint script that runs prep work, then runs the command.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
# ... probably other stuff in here ...
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
cd /home/app
pip3 install -r requirements.txt
# May as well do this too, while we're here.
python3 manage.py migrate
exec "$@"
The entrypoint is run like this at container startup:
/entrypoint.sh $CMD
Which expands to:
/entrypoint.sh python3 manage.py runserver 0.0.0.0:8000
The prep work is run first, then at the end of the entrypoint script, the passed-in argument(s) are exec'd. That's your command, so entrypoint.sh exits and is replaced by your Django app server.
UPDATE:
After taking comments to chat, it came up that it is important to use exec to run the command, instead of running it at the end of the entrypoint script like this:
python3 manage.py runserver 0.0.0.0:8000
The reason it matters: exec replaces the shell with your command, so the server becomes the container's main process (PID 1) and directly receives signals such as the SIGTERM sent by docker stop. Without exec, the shell sits in front of the server and the signal never reaches your app, so the container does not shut down cleanly.
The way I solved this is by running two services:
server: runs the app; depends on requirements
requirements: installs requirements prior to running the server
And this is how the docker-compose.yml file would look like:
version: '3'
services:
  django:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    ports:
      - 8000:8000
    working_dir: /project
    command: python manage.py runserver
    depends_on:
      - requirements
  requirements:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    working_dir: /project
    command: pip install -r requirements.txt
volumes:
  pip37:
    external: true
PS: I created a named volume for the pip modules so I can preserve them across different projects. Since the volume is declared external: true, create it yourself before the first run:
docker volume create pip37
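One caveat (my addition): plain depends_on only orders startup, so django can start before pip install has finished. Newer Compose versions (following the Compose Specification) let the server wait for the requirements service to exit successfully:
services:
  django:
    depends_on:
      requirements:
        condition: service_completed_successfully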
