Hypercorn with "--reload" and Docker volumes - python

I am running Hypercorn with --reload inside a Docker container. The file I am running is kept in a volume managed by Docker Compose.
When I change the file on my system, I can see that the change is reflected in the volume, e.g. with docker compose exec myapp /bin/cat /app/runtime/service.py.
However, when I change a file in this way, Hypercorn does not restart as I would have expected. Is there some adverse interaction between Hypercorn and the Docker volume? Or am I expecting something from the --reload option that I should not expect?
Example files are below. My expectation was that modifying runtime/service.py from outside the container would trigger Hypercorn to restart the server with the modified version of the file. But this does not occur.
Edit: I should add that I am using Docker 20.10.5 via Docker Desktop for Mac, on macOS 10.14.6.
Edit 2: This might be a Hypercorn bug. If I add uvicorn[standard] in requirements.txt and run python -m uvicorn --reload --host 0.0.0.0 --port 8001 service:app, the reloading works fine. Possibly related: https://gitlab.com/pgjones/hypercorn/-/issues/185
entrypoint.sh:
#!/bin/sh
cd /app/runtime
/opt/venv/bin/python -m hypercorn --reload --bind 0.0.0.0:8001 service:app
Dockerfile:
FROM $REDACTED
RUN /opt/venv/bin/python -m pip install -U pip
RUN /opt/venv/bin/pip install -U setuptools wheel
COPY requirements.txt /app/requirements.txt
RUN /opt/venv/bin/pip install -r /app/requirements.txt
COPY requirements-dev.txt /app/requirements-dev.txt
RUN /opt/venv/bin/pip install -r /app/requirements-dev.txt
COPY entrypoint.sh /app/entrypoint.sh
EXPOSE 8001/tcp
CMD ["/app/entrypoint.sh"]
docker-compose.yml:
version: "3.8"
services:
  api:
    container_name: api
    hostname: myapp
    build:
      context: .
    ports:
      - 8001:8001
    volumes:
      - ./runtime:/app/runtime
runtime/service.py:
import logging

import quart

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

app = quart.Quart(__name__)

@app.route('/')
async def handle_hello():
    logger.info('Handling request.')
    return 'Hello, world!\n'

@app.route('/bad')
async def handle_bad():
    logger.critical('Bad request.........')
    raise RuntimeError('Oh no!!!')

Here is a minimal, fully working example which does auto-reload using hypercorn:
docker-compose.yaml
services:
  app:
    build: .
    # Here --reload is used, which works as intended!
    command: hypercorn --bind 0.0.0.0:8080 --reload src:app
    ports:
      - 8080:8080
    volumes:
      - ./src:/app/src
Dockerfile
FROM python:3.10-slim-bullseye
WORKDIR /app
RUN pip install hypercorn==0.14.3 quart==0.18.0
COPY src ./src
EXPOSE 8080
ENV QUART_APP=src:app
# This is the production command; docker-compose.yaml overwrites it for local development
CMD hypercorn --bind 0.0.0.0:8080 src:app
src/__init__.py
from quart import Quart

app = Quart(__name__)

@app.route('/', methods=['GET'])
def get_root():
    return "Hello world!"
Run it via docker-compose up and notice how hypercorn reloads once __init__.py is modified.

You likely need to use a volume mount to get the reload functionality!
This is because when you build the image, whatever you have locally is baked into it; further changes only affect your local filesystem.
This is arguably not intended use (the container becomes dependent on external files!), but it is useful for faster testing/debugging.
You could also edit files directly inside the container by connecting to it, which may suit your needs.
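For instance (a rough sketch, assuming the service name app from the compose file above and the /app/src path from the Dockerfile), you could open a shell in the running container, or copy a locally edited file into it:
# open an interactive shell inside the running container
docker compose exec app /bin/sh

# or copy a locally edited file into the container's filesystem
docker cp src/__init__.py "$(docker compose ps -q app)":/app/src/__init__.py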

Related

Uvicorn --reload using WatchFiles is not working for FastAPI in Docker container

I am developing a FastAPI app. It is running on Uvicorn in a Docker container using docker-compose.
I want to include some files other than *.py to trigger the auto reload while in development.
According to the docs, Uvicorn needs the optional dependency WatchFiles installed to be able to use the --reload-include flag, which would let me include other file types to trigger a reload. However, when WatchFiles is installed (Uvicorn confirms this at startup by printing Started reloader process [1] using WatchFiles), no auto reloads happen at all. Mind you, this is independent of changes to the run command, with or without the include flag.
Without WatchFiles installed, Uvicorn's default auto reload works as intended for just *.py files.
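For illustration, the kind of command I would expect to work once WatchFiles is installed looks something like this (the *.html pattern here is just an example):
uvicorn package.main:app --host 0.0.0.0 --port 80 --reload --reload-include '*.html'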
What I've got
This is the Dockerfile:
FROM python:3.10
WORKDIR /tmp
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade -r requirements.txt
WORKDIR /code
CMD ["uvicorn", "package.main:app", "--host", "0.0.0.0", "--port", "80", "--reload"]
This is the docker-compose.yml:
version: "3.9"
services:
fastapi-dev:
image: myimagename:${TAG:-latest}
build:
context: .
volumes:
- ./src:/code
- ./static:/static
- ./templates:/templates
restart: on-failure
ports:
- "${HTTP_PORT:-8080}:80"
(I need a docker-compose file because of some services required later on.)
The most basic FastAPI app:
from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get('/')
async def index():
    raise HTTPException(418)
Mind you, this is probably of no concern as the problem does not seem to be related to FastAPI.
requirements.txt:
fastapi~=0.85
pydantic[email]~=1.10.2
validators~=0.20.0
uvicorn~=0.18
watchfiles
python-decouple==3.6
python-multipart
pyotp~=2.7
wheezy.template~=3.1
How did I try to resolve this issue?
I tried using command: uvicorn package.main:app --host 0.0.0.0 --port 80 --reload in docker-compose.yml instead of CMD [...] in the Dockerfile, which unsurprisingly changed nothing.
I created a file watch.py to test if WatchFiles works:
from watchfiles import watch

for changes in watch('/code', force_polling=True):
    print(changes)
And…in fact it does work: running it from inside the container (python -m watch) prints all the changes made. It also works just as well asynchronously with asyncio, so the problem probably has nothing to do with the file system/share/mount within Docker.
So…
How do I fix it? What is wrong with Uvicorn? I need to watch other file types, e.g. *.html in /templates. Do I have to get WatchFiles to work, or are there other ways? If so, how?
I just had the same problem, and the problem is with WatchFiles.
The watchfiles documentation indicates that change detection relies on file system notifications, and it seems those events are not delivered inside Docker when the files live on a mounted volume:
Notify will fall back to file polling if it can't use file system notifications
So you have to tell watchfiles to force polling. That is what you did in your test Python script with the force_polling parameter, and that is why it works:
for changes in watch('/code', force_polling=True):
Fortunately, the documentation also gives us the possibility to force polling via an environment variable.
Add this environment variable to your docker-compose.yml and auto-reload will work:
services:
  fastapi-dev:
    image: myimagename:${TAG:-latest}
    build:
      context: .
    volumes:
      - ./src:/code
      - ./static:/static
      - ./templates:/templates
    restart: on-failure
    ports:
      - "${HTTP_PORT:-8080}:80"
    environment:
      - WATCHFILES_FORCE_POLLING=true
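Alternatively (an equivalent option, just a different place to set it), the variable could be baked into the image via the Dockerfile; since watchfiles is only consulted by --reload, this should be harmless outside development:
ENV WATCHFILES_FORCE_POLLING=true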

Changes on template files inside volume not showing on Flask frontend

I am using a docker-compose Flask implementation with the following configuration
docker-compose:
version: '3'
services:
  dashboard:
    build:
      context: dashboard/
      args:
        APP_PORT: "8080"
    container_name: dashboard
    ports:
      - "8080:8080"
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: "8080"
    volumes:
      - ./dashboard/:/usr/src/app
dashboard/Dockerfile:
FROM python:3.7-slim-bullseye
ENV PYTHONUNBUFFERED True
ARG APP_PORT
ENV APP_HOME /usr/src/app
WORKDIR $APP_HOME
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
CMD exec gunicorn --bind :$APP_PORT --workers 1 --threads 8 --timeout 0 main:app
dashboard/main.py:
import os

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')
If I apply any change to the index.html file on my host system using VSCode, the changes are not applied when I refresh the page. However, I have checked inside the container with docker exec -it dashboard bash and cat /usr/src/app/templates/index.html, and the changes are reflected there, since the volume is shared between the host and the container.
If I stop the container and run it again, the changes are applied, but as I am working on the frontend, doing that every time is pretty annoying.
Why do the changes not show in the browser even though they are replicated in the container?
You should use TEMPLATES_AUTO_RELOAD=True.
From https://flask.palletsprojects.com/en/2.0.x/config/
It appears that the templates are cached after the first load and won't be re-read until you enable this option.
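A minimal sketch of one way to set it, using the main.py from the question (setting the config key directly; it could equally be read from an environment variable):
import os

from flask import Flask, render_template

app = Flask(__name__)
# Re-check template files on each render instead of serving the cached version.
app.config['TEMPLATES_AUTO_RELOAD'] = True

@app.route('/')
def index():
    return render_template('index.html')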

http://localhost/5000 not working in docker flask

can't open file '/web/manage.py': [Errno 2] No such file or directory
exited with code 2
NOTE: I tried the solutions posted for similar problems; none of them worked.
No matter what I do, I am not able to get http://localhost/5000 to work, even when the above error goes away after removing volumes and command from the docker-compose file.
Below is docker-compose.yml
services:
  web:
    build: ./web
    command: python /web/manage.py runserver 0.0.0.0:8000
    volumes:
      - './users:/usr/src/app'
    ports:
      - 5000:5000
    env_file:
      - ./.env.dev
Below is Dockerfile:
# pull official base image
FROM python:3.9.5-slim-buster
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
Below is manage.py:
from flask.cli import FlaskGroup

from project import app

cli = FlaskGroup(app)

if __name__ == "__main__":
    cli()
Below is __init__.py:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def hello_world():
    return jsonify(hello="world")
Below is the structure (posted as a screenshot): the files marked in red appeared when I ran docker-compose build.
A couple of changes need to be made.
The cli command in your docker-compose.yml file needs to be:
command: python /usr/src/app/manage.py run -h 0.0.0.0 -p 8000
The command name is run, not runserver, and the host IP to bind and the port to listen on are configured via separate options (-h and -p).
Also, the port mapping for the service needs to map to the container port used in the command:
ports:
  - 5000:8000
In your manage.py module, FlaskGroup should be given a create_app factory, not the app instance.
You can implement this as a lambda function:
cli = FlaskGroup(create_app=(lambda: app))
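Equivalently (just a stylistic alternative, not required by the fix above), a named factory function would look roughly like this:
from flask.cli import FlaskGroup

from project import app


def create_app():
    # Factory that simply returns the existing app instance.
    return app


cli = FlaskGroup(create_app=create_app)

if __name__ == "__main__":
    cli()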
Edit
The source files are not mounted in the container volume; that is why you're getting "no such file manage.py".
You need to mount your source files into the container volume under /usr/src/app:
volumes:
  - './web:/usr/src/app'

django runserver hangs in docker-compose up but runs correctly in docker-compose run

Edit
Adding --ipv6 to the command, even though IPv6 is not properly configured, seems to get past the point where the process hangs.
Problem
Calling docker-compose up executes runserver but hangs at some point after printing the current time.
Calling docker-compose run -p 8000:8000 web python manage.py runserver 0.0.0.0:8000 also starts the server, but does so successfully, and it can be reached at 192.168.99.100:8000.
Questions
How come I can run the server directly from docker-compose in my shell but not from the .yml file?
To me, the content of the .yml file and the docker-compose run line from the shell are strikingly similar.
The only difference I can think of would perhaps be permissions at some level required to properly start a Django server, but I don't know how to address that. Docker runs on a Windows 8.1 machine. The shared folder for my virtual machine is the default c:\Users.
Files
My folder contains a fresh Django project as well as these Docker files. I've tried different versions of Python and Django, but the result is the same. I've cleaned up my images and containers between attempts using:
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
Dockerfile
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
requirements.txt
Django>=1.8,<2.0
System
My operating system is Windows 8.1.
I was hit by this issue myself and it seems that you need to allocate a tty and a stdin to your container in order to make runserver work:
python:
  image: my-image:latest
  stdin_open: true   # docker run -i
  tty: true          # docker run -t
  build:
    context: ..
    dockerfile: docker/Dockerfile
I had the same issue and could not get it to do anything else. However, when I looked up the IP of the Docker machine (docker-machine ip returned 192.168.99.100) and went to 192.168.99.100:8000, my Docker container started receiving the requests.

Run a docker container from an existing container using docker-py

I have a Docker container which runs a Flask application. When Flask receives an HTTP request, I would like to trigger the execution of a new ephemeral Docker container which shuts down once it completes what it has to do.
I have read that Docker-in-Docker should be avoided, so this new container should be run as a sibling container on my host, not within the Flask container.
What would be the solution to do this with docker-py?
We do things like this by mounting docker.sock as a shared volume between the host machine and the container. This allows the container to send commands to the host, such as docker run.
This is an example from our CI system:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Answering my own question. Here is a complete setup which works.
In one folder, create the following files:
requirements.txt
Dockerfile
docker-compose.yml
api.py
requirements.txt
docker==3.5.0
flask==1.0.2
Dockerfile
FROM python:3.7-alpine3.7
# Project files
ARG PROJECT_DIR=/srv/api
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY requirements.txt ./
# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
docker-compose.yml
Make sure to mount docker.sock in volumes as mentioned in the previous answer above.
version: '3'
services:
  api:
    container_name: test
    restart: always
    image: test
    build:
      context: ./
    volumes:
      - ./:/srv/api/
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      FLASK_APP: api.py
    command: ["flask", "run", "--host=0.0.0.0"]
    ports:
      - 5000:5000
api.py
from flask import Flask
import docker

app = Flask(__name__)

@app.route("/")
def hello():
    client = docker.from_env()
    client.containers.run('alpine', 'echo hello world', detach=True, remove=True)
    return "Hello World!"
Then open your browser and navigate to http://0.0.0.0:5000/
It will trigger the execution of the alpine container. If you don't already have the alpine image, it will take a bit of time the first time because Docker will automatically download the image.
The argument detach=True runs the container asynchronously, so Flask does not wait for the end of the process before returning its response.
The argument remove=True tells Docker to remove the container once its execution has completed.
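If you need the command's output instead, a minimal variation (just a sketch; the /output route name is made up here) is to drop detach=True, in which case containers.run blocks until the container exits and returns its stdout:
from flask import Flask
import docker

app = Flask(__name__)

@app.route("/output")
def hello_with_output():
    client = docker.from_env()
    # Without detach=True, run() blocks until the container exits and returns
    # its stdout as bytes; remove=True still cleans the container up afterwards.
    output = client.containers.run('alpine', 'echo hello world', remove=True)
    return output.decode()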
