I'm developing some Python microservices with gRPC, and I'm using Docker for the Cassandra database and the microservices. Is there a way to set up reload-on-change within docker-compose?
I'm guessing that first I need the code mounted as a volume, but I don't see a way to reload the gRPC server on change the way, for example, Flask does.
We use watchdog[watchmedo] with our gRPC services and Docker.
Install watchdog or add it to your requirements.txt file:
python -m pip install watchdog[watchmedo]
Then in your docker-compose.yml, add watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/" python -- -m app as the command of your container, where --directory is the directory containing your app inside the Docker container, and python -- -m app launches the module that starts your gRPC server. In this example the file that starts the server is called app.py:
app:
  build:
    context: ./app/
    dockerfile: ./Dockerfile
    target: app
  command: watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/" python -- -m app
  volumes:
    - ./app/:/usr/src/app/
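For reference, a minimal sketch of what such an app.py entrypoint could look like, so that python -- -m app (i.e. python -m app) has something to restart; the servicer registration is a placeholder for your own generated gRPC code:
# app.py - minimal gRPC server entrypoint (sketch)
from concurrent import futures

import grpc


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    # Register your generated servicer here, e.g. (hypothetical names):
    # my_service_pb2_grpc.add_MyServiceServicer_to_server(MyServicer(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()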
I am developing a FastAPI app. It is running on Uvicorn in a Docker container using docker-compose.
I want to include some files other than *.py to trigger the auto reload while in development.
According to the docs, Uvicorn needs the optional dependency WatchFiles installed to be able to use the --reload-include flag, which would let me include other file types to trigger a reload. However, when WatchFiles is installed (with Uvicorn confirming this at startup: Started reloader process [1] using WatchFiles), no auto reloads happen at all. Mind you, this is independent of changes to the run command, with or without the include flag.
Without WatchFiles installed, Uvicorn's default auto reload works as intended for just *.py files.
What I've got
This is the Dockerfile:
FROM python:3.10
WORKDIR /tmp
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade -r requirements.txt
WORKDIR /code
CMD ["uvicorn", "package.main:app", "--host", "0.0.0.0", "--port", "80", "--reload"]
This is the docker-compose.yml:
version: "3.9"
services:
fastapi-dev:
image: myimagename:${TAG:-latest}
build:
context: .
volumes:
- ./src:/code
- ./static:/static
- ./templates:/templates
restart: on-failure
ports:
- "${HTTP_PORT:-8080}:80"
(I need a docker-compose file because of some services required later on.)
The most basic FastAPI app:
from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get('/')
async def index():
    raise HTTPException(418)
Mind you, this is probably of no concern as the problem does not seem to be related to FastAPI.
requirements.txt:
fastapi~=0.85
pydantic[email]~=1.10.2
validators~=0.20.0
uvicorn~=0.18
watchfiles
python-decouple==3.6
python-multipart
pyotp~=2.7
wheezy.template~=3.1
How did I try to resolve this issue?
I tried using command: uvicorn package.main:app --host 0.0.0.0 --port 80 --reload in docker-compose.yml instead of CMD [...] in the Dockerfile, which unsurprisingly changed nothing.
I created a file watch.py to test if WatchFiles works:
from watchfiles import watch
for changes in watch('/code', force_polling=True):
    print(changes)
And… in fact, it does work. Running it from the container in the Docker CLI (python -m watch) prints all the changes made. And by the way, it works just as well async/using asyncio. So it probably has nothing to do with the file system/share/mount within Docker.
So…
How do I fix it? What is wrong with Uvicorn? I need to watch other file types, e.g. *.html in /templates. Do I have to get WatchFiles to work, or are there other ways? If so, how?
I just had the same problem, and the cause is WatchFiles.
From the watchfiles documentation it appears that change detection relies on file system notifications, and I think that with Docker these events are not raised when using a mounted volume:
Notify will fall back to file polling if it can't use file system notifications
So you have to tell watchfiles to force polling; that is what you did in your test Python script with the force_polling parameter, and that is why it works:
for changes in watch('/code', force_polling=True):
Fortunately, the documentation gives us the possibility to force polling via an environment variable.
Add this environment variable to your docker-compose.yml and auto-reload will work:
services:
  fastapi-dev:
    image: myimagename:${TAG:-latest}
    build:
      context: .
    volumes:
      - ./src:/code
      - ./static:/static
      - ./templates:/templates
    restart: on-failure
    ports:
      - "${HTTP_PORT:-8080}:80"
    environment:
      - WATCHFILES_FORCE_POLLING=true
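With polling enabled, you should then be able to add the include flags to the Uvicorn command to watch non-Python files as well, for example (an illustrative command; it assumes you want to keep watching /code and also watch the mounted /templates directory):
uvicorn package.main:app --host 0.0.0.0 --port 80 --reload --reload-dir /code --reload-dir /templates --reload-include "*.html"
As far as I know, passing --reload-dir limits watching to the listed directories, which is why /code is listed explicitly here.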
I'm trying to create a simple web application container within Ubuntu-WSL2 with the help of Docker. So I built my container by creating a my-simple-webapp folder, and within that folder I created the Dockerfile and app.py files:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y python python-pip
RUN pip install flask
COPY app.py /opt/
ENTRYPOINT FLASK_APP=/opt/app.py flask run --host=0.0.0.0 --port=8080
app.py
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def main():
    return "Welcome!"

@app.route('/how are you')
def hello():
    return 'I am good, how about you?'

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
When I run the command docker build ./my-simple-webapp it works without error. However, when I use my browser to connect to my container by typing 172.17.0.2:8080, 0.0.0.0:8080 or localhost:8080, the connection times out.
Resource: https://github.com/mmumshad/simple-webapp-flask
If all you ran is docker build..., then you still need to start your container with docker run....
You can open the Docker Dashboard (in your Windows tray) to check whether your container is actually running.
To actually run your app you need to start a container. First, build the image:
docker build -t simple-webapp-flask .
Then start a container from the image, mapping port 8080 on your host to port 8080 in the container:
docker run -p 8080:8080 simple-webapp-flask
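Once the container is up, a quick sanity check from another shell (this simply hits the / route defined in app.py and should print "Welcome!"):
curl http://localhost:8080/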
If you want to deploy your Flask application, you need to choose from the following options:
https://flask.palletsprojects.com/en/1.1.x/deploying/
The way you are trying to do it can be used only for development purposes.
I have a Django application whose Docker image's Dockerfile is as follows:
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip3 install -r requirements.txt
COPY . /code/
ENV PORT=8000
EXPOSE 8000
Then, I have a docker-compose.yml file for defining other container images and dependencies for my Django application as follows:
version: '3'
services:
  web: &django_app
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - "80:8000"
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:latest
  celery_worker:
    <<: *django_app
    command: celery -A DJingApp worker --loglevel=info
    ports: []
    depends_on:
      - rabbitmq
As you can see above, I need 3 containers (web, rabbitmq, and celery_worker) running at any point in time for my Django app to work.
So, how do I deploy this project's Docker images to AWS Elastic Beanstalk and run them out there? Are there any changes that I will have to make to my Dockerfile or docker-compose.yml? If yes, what are they?
It's quite challenging to deploy a multi-container app to Elastic Beanstalk. You need a Dockerrun.aws.json version 2 configuration file which is an Elastic Beanstalk–specific JSON file that describes how to deploy a set of Docker containers as an Elastic Beanstalk application.
If you don't know how to create the configuration file, I suggest reviewing the official page.
You can also use the container-transform utility to convert your docker-compose file into a Dockerrun.aws.json configuration file, which I find very helpful. You may need to make some changes to the autogenerated file.
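For illustration, a Dockerrun.aws.json v2 for this setup could look roughly like the following (container names, memory values, and the registry path are placeholders; note that unlike docker-compose, Dockerrun v2 cannot build images from a Dockerfile, so the Django image must be built and pushed to a registry such as ECR or Docker Hub first):
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "rabbitmq",
      "image": "rabbitmq:latest",
      "essential": true,
      "memory": 256
    },
    {
      "name": "web",
      "image": "<your-registry>/django-app:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 8000 }
      ],
      "links": ["rabbitmq"]
    },
    {
      "name": "celery-worker",
      "image": "<your-registry>/django-app:latest",
      "essential": true,
      "memory": 512,
      "command": ["celery", "-A", "DJingApp", "worker", "--loglevel=info"],
      "links": ["rabbitmq"]
    }
  ]
}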
To customize your environment you also need to use .ebextensions, for example to define your Django settings path, the WSGI path, and web server configuration, or to execute Django management commands before deployment.
For detailed logs, I suggest using environment logs under:
Elastic Beanstalk - Environments - 'your-environment-name' - Logs
Note: I suggest using eb deploy for a successful deployment, since you need to deploy your source code in .zip file format.
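A typical EB CLI flow would then be something like this (the environment name is a placeholder):
eb init                  # choose the multi-container Docker platform when prompted
eb create my-django-env  # create the environment
eb deploy                # zip and deploy the current project directory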
I have a docker-compose config like the following:
version: '3.7'
services:
  flask:
    command: [python, app.py]
    ports:
      - "127.0.0.1:5000:5000"
  frontend:
    command: [sh, -c, "npm run start"]
    ports:
      - "127.0.0.1:7600:7600"
    links:
      - flask
The frontend container has a webpack development server running which proxies /api/* path requests to flask:5000 for processing. This works great when I use docker-compose up -d.
However, let's say I want to debug something in the flask app using pdb, and I run it manually instead using:
docker-compose stop flask
docker-compose run --rm --service-ports flask python app.py
Then suddenly, my frontend service cannot proxy requests to my flask service, and I'm getting an error like: Error occurred while trying to proxy request /testing from frontend:7600 to http://flask:5000 (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
What am I missing? How do I make this configuration work for interactive debugging of my python code?
Edit: I'm running Docker version 18.09.0, build 4d60db4 and docker-compose version 1.23.2, build 1110ad01
The problem is that when you run it manually, the container name is no longer flask; you can see that with docker network inspect your_network.
You first need to run docker-compose rm flask to free the name flask, then docker-compose run --service-ports --name flask flask.
Finally, I don't know how you use pdb, but you can normally use docker-compose exec flask sh or docker attach container_id to get an interactive prompt.
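As a side note, if you want docker attach to give you a usable pdb prompt on a container started with docker-compose up, the flask service usually also needs an open stdin and a TTY in the compose file, something like this addition to your existing service definition:
flask:
  command: [python, app.py]
  stdin_open: true   # keep STDIN open so pdb can read input
  tty: true          # allocate a TTY for the attached session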
I have a Docker container which runs a Flask application. When Flask receives and http request, I would like to trigger the execution of a new ephemeral Docker container which shutdowns once it completes what it has to do.
I have read Docker-in-Docker should be avoided so this new container should be run as a sibling container on my host and not within the Flask container.
What would be the solution to do this with docker-py?
We are doing stuff like this by mounting docker.sock as a shared volume between the host machine and the container. This allows the container to send commands to the host machine, such as docker run.
This is an example from our CI system:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
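A quick way to verify from inside the container that the mounted socket is usable (a small sketch using docker-py):
# check_docker.py - confirm the host's Docker daemon is reachable via the mounted socket
import docker

client = docker.from_env()   # picks up /var/run/docker.sock by default
print(client.ping())         # True if the daemon answers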
Answering my own question. Here is a complete setup which works.
In one folder, create the following files:
requirements.txt
Dockerfile
docker-compose.yml
api.py
requirements.txt
docker==3.5.0
flask==1.0.2
Dockerfile
FROM python:3.7-alpine3.7
# Project files
ARG PROJECT_DIR=/srv/api
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY requirements.txt ./
# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
docker-compose.yml
Make sure to mount docker.sock in volumes, as mentioned in the previous answer.
version: '3'
services:
  api:
    container_name: test
    restart: always
    image: test
    build:
      context: ./
    volumes:
      - ./:/srv/api/
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      FLASK_APP: api.py
    command: ["flask", "run", "--host=0.0.0.0"]
    ports:
      - 5000:5000
api.py
from flask import Flask
import docker

app = Flask(__name__)

@app.route("/")
def hello():
    client = docker.from_env()
    client.containers.run('alpine', 'echo hello world', detach=True, remove=True)
    return "Hello World!"
Then open your browser and navigate to http://0.0.0.0:5000/
It will trigger the execution of the alpine container. If you don't already have the alpine image, the first run will take a bit longer because Docker will automatically download the image.
The argument detach=True allows the container to run asynchronously, so that Flask does not wait for the end of the process before returning its response.
The argument remove=True tells Docker to remove the container once its execution has completed.
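If you instead want Flask to wait for the container and return its output, a small variation is to drop detach=True, since docker-py then blocks and returns the container's stdout (the /sync route name here is just an example):
@app.route("/sync")
def hello_sync():
    client = docker.from_env()
    # Without detach=True, run() blocks until the container exits and
    # returns its output as bytes.
    output = client.containers.run('alpine', 'echo hello world', remove=True)
    return output.decode()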