Run a docker container from an existing container using docker-py - python

I have a Docker container which runs a Flask application. When Flask receives an HTTP request, I would like it to trigger the execution of a new ephemeral Docker container which shuts down once it completes its work.
I have read that Docker-in-Docker should be avoided, so this new container should run as a sibling container on my host, not inside the Flask container.
What would be the solution to do this with docker-py?

We do things like this by mounting docker.sock as a shared volume between the host machine and the container. This allows the container to send commands such as docker run to the host's Docker daemon.
this is an example from our CI system:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock

Answering my own question. Here is a complete setup which works.
In one folder, create the following files:
requirements.txt
Dockerfile
docker-compose.yml
api.py
requirements.txt
docker==3.5.0
flask==1.0.2
Dockerfile
FROM python:3.7-alpine3.7
# Project files
ARG PROJECT_DIR=/srv/api
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY requirements.txt ./
# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
docker-compose.yml
Make sure to mount docker.sock in volumes as mentioned in the previous answer above.
version: '3'
services:
  api:
    container_name: test
    restart: always
    image: test
    build:
      context: ./
    volumes:
      - ./:/srv/api/
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      FLASK_APP: api.py
    command: ["flask", "run", "--host=0.0.0.0"]
    ports:
      - 5000:5000
api.py
from flask import Flask
import docker

app = Flask(__name__)

@app.route("/")
def hello():
    client = docker.from_env()
    client.containers.run('alpine', 'echo hello world', detach=True, remove=True)
    return "Hello World!"
Then open your browser and navigate to http://0.0.0.0:5000/
It will trigger the execution of the alpine container. If you don't already have the alpine image, the first run will take a little longer because Docker has to download it.
The argument detach=True runs the container asynchronously, so Flask does not wait for the process to finish before returning its response.
The argument remove=True tells Docker to remove the container once its execution has completed.
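If you instead need the container's output in the HTTP response, a blocking variant also works. Here is a sketch extending api.py above (the /output route name is invented for illustration); without detach=True, run() returns the container's stdout as bytes:
@app.route("/output")
def hello_output():
    client = docker.from_env()
    # Without detach=True, run() blocks until the container exits
    # and returns its stdout as bytes.
    output = client.containers.run('alpine', 'echo hello world', remove=True)
    return output.decode()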

Related

How to run a command in docker-compose after a service run?

I have searched but couldn't find a solution for my problem. My docker-compose.yml file is below.
version: '2.1'
services:
  mongo:
    image: mongo_db
    build: mongo_image
    container_name: my_mongodb
    restart: always
    networks:
      - isolated_network
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
    entrypoint: ["python3", "/tmp/script/get_api_to_mongodb.py", "&"]
networks:
  isolated_network:
So here I use a custom Dockerfile, shown below.
FROM mongo:latest
RUN apt-get update -y
RUN apt-get install python3-pip -y
RUN pip3 install requests
RUN pip3 install pymongo
RUN apt-get clean -y
RUN mkdir -p /tmp/script
COPY get_api_to_mongodb.py /tmp/script/get_api_to_mongodb.py
#CMD ["python3","/tmp/script/get_api_to_mongodb.py","&"]
Here I want to create a container that runs MongoDB; after the container is created, I collect data using an API and send it to MongoDB. But when I run the Python script, MongoDB is not yet initialized. So I need to run my script after the container is created and MongoDB has finished initializing. Thanks in advance.
You should run this script as a separate container. It's not "part of the database", like an extension or plugin, but rather an ordinary client process that happens to connect to the database and that you want to run relatively early on. In general, if you're thinking about trying to launch a background process in a container, it's often a better approach to run foreground processes in two separate containers.
This setup means you can use a simpler Dockerfile that starts from an image with Python preinstalled:
FROM python:3.10
RUN pip install requests pymongo
WORKDIR /app
COPY get_api_to_mongodb.py .
CMD ["./get_api_to_mongodb.py"]
Then in your Compose setup, declare this as a second container alongside the first one. Since the script is in its own image, you can use the unmodified mongo image.
version: '2.4'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
  loader:
    build: .
    restart: on-failure
    depends_on:
      - mongo
    # environment:
    #   - MONGO_HOST=mongo
    #   - MONGO_USERNAME=root
    #   - MONGO_PASSWORD=root_pw
Note that the loader will re-run every time you run docker-compose up -d. You also may have to wait for the database to do its initialization before you can run the loader process; see Docker Compose wait for container X before starting Y.
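A simple way to handle that wait inside the loader itself is a startup retry loop. Here is a sketch using pymongo and the (commented-out) environment variable names from the Compose file above:
import os
import time

from pymongo import MongoClient
from pymongo.errors import PyMongoError

client = MongoClient(
    host=os.environ.get('MONGO_HOST', 'mongo'),
    username=os.environ.get('MONGO_USERNAME', 'root'),
    password=os.environ.get('MONGO_PASSWORD', 'root_pw'),
)
for _ in range(30):
    try:
        client.admin.command('ping')  # cheap liveness probe
        break
    except PyMongoError:
        time.sleep(1)  # database not ready yet; retry
else:
    raise SystemExit('MongoDB never became ready')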
It's likely you have an existing Compose service for your real application:
version: '2.4'
services:
  mongo: { ... }
  app:
    build: .
    ...
If that image contains the loader script, then you can docker-compose run it. This launches a new temporary container, using most of the attributes from the Compose service declaration, but you provide an alternate command: and the ports: are ignored.
docker-compose run app ./get_api_to_mongodb.py
One might ideally like a workflow where first the database container starts; then once it's accepting requests, run the loader script as a temporary container; then once that's completed start the main application server. This is mostly beyond Compose's capabilities, though you can probably get close with a combination of extended depends_on: declarations and a healthcheck: for the database.
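For example, with the version '2.4' file format used above, something like this sketch should get close (the mongosh healthcheck command is an assumption and depends on the mongo image version):
version: '2.4'
services:
  mongo:
    image: mongo:latest
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 3s
      retries: 10
  loader:
    build: .
    depends_on:
      mongo:
        condition: service_healthy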

Changes on template files inside volume not showing on Flask frontend

I am using a docker-compose Flask implementation with the following configuration
docker-compose:
version: '3'
services:
  dashboard:
    build:
      context: dashboard/
      args:
        APP_PORT: "8080"
    container_name: dashboard
    ports:
      - "8080:8080"
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: "8080"
    volumes:
      - ./dashboard/:/usr/src/app
dashboard/Dockerfile:
FROM python:3.7-slim-bullseye
ENV PYTHONUNBUFFERED True
ARG APP_PORT
ENV APP_HOME /usr/src/app
WORKDIR $APP_HOME
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
CMD exec gunicorn --bind :$APP_PORT --workers 1 --threads 8 --timeout 0 main:app
dashboard/main.py:
import os
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')
If I apply any change to the index.html file on my host system using VSCode, the changes are not applied when I refresh the page. However, if I get into the container with docker exec -it dashboard bash and cat /usr/src/app/templates/index.html, the changes are reflected inside the container, since the volume is shared between the host and the container.
If I stop the container and run it again, the changes are applied, but as I am working on the frontend, doing that all the time is pretty annoying.
Why do the changes not show in the browser even though they are replicated inside the container?
You should use: TEMPLATES_AUTO_RELOAD=True
From https://flask.palletsprojects.com/en/2.0.x/config/
It appears that the templates are preloaded and won't update until you enable this feature.
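Concretely, a sketch of enabling it in main.py:
from flask import Flask

app = Flask(__name__)
# Re-read template files from disk on each render instead of serving
# the versions cached at startup.
app.config['TEMPLATES_AUTO_RELOAD'] = True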

run flask and golang server in docker

I am trying to run a Go web server and a Flask server in the same Docker container. I have one Dockerfile that builds the Flask app. How can I update the Dockerfile so the container runs both the Python and the Go program?
ProjectFolder
  pyfolder/
    app.py
    Dockerfile
  main.go
main.go
package main

import (
    "fmt"
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Println("func called")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
app.py
from flask import Flask
import os

app = Flask(__name__)

@app.route("/")
def hello():
    return "Flask inside Docker!!"

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(debug=True, host='0.0.0.0', port=port)
Dockerfile
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
Since you have two separate programs, you would typically run these in two containers, with two separate images. In both cases you can use a basic Dockerfile for their respective languages:
# pyfolder/Dockerfile
FROM python:3.6
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
# ./Dockerfile
FROM golang:1.16-alpine AS build
# a multi-source COPY needs a directory destination ending in /
COPY go.mod go.sum ./
RUN go mod download
COPY main.go .
RUN go build -o main main.go

FROM alpine
COPY --from=build /go/main /usr/bin/main
EXPOSE 8080
CMD ["main"]
You don't discuss at all why you have two containers or how they communicate. You'll frequently want containers to have no local state at all if they can manage it, and communicate only over network interfaces like HTTP. This means you'll need some way to configure the network address one service uses to call another, since it will be different running in a local development environment vs. running in containers (vs. deployed to the cloud, vs. running in Kubernetes, vs....) An environment variable would be a typical approach; say the Go code needs to call the Python code:
url := os.Getenv("PYAPP_URL")
if url == "" {
    url = "http://localhost:5000" // the Flask app's local development port
}
resp, err := http.Get(url)
A typical tool to run multiple containers together would be Docker Compose. It's not the only tool, but it's a standard part of the Docker ecosystem and it's simpler than many of the alternatives. You'd write a YAML file describing the two containers:
version: '3.8'
services:
  pyapp:
    build: ./pyfolder
  server:
    build: .  # the directory containing the Go code and its Dockerfile
    environment:
      - PYAPP_URL=http://pyapp:5000
    ports:
      - '8080:8080'
    depends_on:
      - pyapp
Running docker-compose up --build will build the two images and start the two containers.
You should be able to accomplish this using a multistage build in your Dockerfile.
First, move the Dockerfile to the root directory next to the main.go.
Next, modify your Dockerfile to:
# grab the golang image
FROM golang:1.16 AS build
WORKDIR /src
# the project has no go.mod, so build in GOPATH mode
ENV GO111MODULE=off
COPY main.go .
RUN go build -o app .

# python portion
FROM python:3.6
WORKDIR /app
COPY pyfolder/ ./
RUN pip install -r requirements.txt
# copy the go binary from the build stage
COPY --from=build /src/app ./go-app
# start the go binary in the background, then python in the foreground
CMD ./go-app & exec python app.py
I do think The Fool is correct in suggesting to separate these into two different containers. I would look into docker-compose instead of using a multi-stage build for this use.
Docker Multistage Build Reference

Hypercorn with "--reload" and Docker volumes

I am running Hypercorn with --reload inside a Docker container. The file I am running is kept in a volume managed by Docker Compose.
When I change the file on my system, I can see that the change is reflected in the volume, e.g. with docker compose exec myapp /bin/cat /app/runtime/service.py.
However, when I change a file in this way, Hypercorn does not restart as I would have expected. Is there some adverse interaction between Hypercorn and the Docker volume? Or am I expecting something from the --reload option that I should not expect?
Example files are below. My expectation was that modifying runtime/service.py from outside the container would trigger Hypercorn to restart the server with the modified version of the file. But this does not occur.
Edit: I should add that I am using Docker 20.10.5 via Docker Desktop for Mac, on MacOS 10.14.6.
Edit 2: This might be a Hypercorn bug. If I add uvicorn[standard] in requirements.txt and run python -m uvicorn --reload --host 0.0.0.0 --port 8001 service:app, the reloading works fine. Possibly related: https://gitlab.com/pgjones/hypercorn/-/issues/185
entrypoint.sh:
#!/bin/sh
cd /app/runtime
/opt/venv/bin/python -m hypercorn --reload --bind 0.0.0.0:8001 service:app
Dockerfile:
FROM $REDACTED
RUN /opt/venv/bin/python -m pip install -U pip
RUN /opt/venv/bin/pip install -U setuptools wheel
COPY requirements.txt /app/requirements.txt
RUN /opt/venv/bin/pip install -r /app/requirements.txt
COPY requirements-dev.txt /app/requirements-dev.txt
RUN /opt/venv/bin/pip install -r /app/requirements-dev.txt
COPY entrypoint.sh /app/entrypoint.sh
EXPOSE 8001/tcp
CMD ["/app/entrypoint.sh"]
docker-compose.yml:
version: "3.8"
services:
  api:
    container_name: api
    hostname: myapp
    build:
      context: .
    ports:
      - 8001:8001
    volumes:
      - ./runtime:/app/runtime
runtime/service.py:
import logging
import quart

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

app = quart.Quart(__name__)

@app.route('/')
async def handle_hello():
    logger.info('Handling request.')
    return 'Hello, world!\n'

@app.route('/bad')
async def handle_bad():
    logger.critical('Bad request.........')
    raise RuntimeError('Oh no!!!')
Here is a minimal, fully working example which does auto-reload using hypercorn:
docker-compose.yaml
services:
  app:
    build: .
    # Here --reload is used, which works as intended!
    command: hypercorn --bind 0.0.0.0:8080 --reload src:app
    ports:
      - 8080:8080
    volumes:
      - ./src:/app/src
Dockerfile
FROM python:3.10-slim-bullseye
WORKDIR /app
RUN pip install hypercorn==0.14.3 quart==0.18.0
COPY src ./src
EXPOSE 8080
ENV QUART_APP=src:app
# This is the production command; docker-compose.yaml overwrites it for local development
CMD hypercorn --bind 0.0.0.0:8080 src:app
src/__init__.py
from quart import Quart

app = Quart(__name__)

@app.route('/', methods=['GET'])
def get_root():
    return "Hello world!"
Run it via docker-compose up and notice how hypercorn reloads once __init__.py is modified.
You likely need to use a volume mount to get the reload functionality!
This is because building the container bakes in whatever you have locally; further changes only affect your local filesystem.
This is arguably not intended use (the container becomes dependent on external files!), but it is useful for faster testing/debugging.
You could also directly edit the container by connecting to it, which you may find suits your needs.
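For example (a sketch; app is the service name from the Compose file above):
docker compose exec app /bin/sh
From that shell you can edit files with whatever tools the image ships.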

http://localhost/5000 not working in docker flask

can't open file '/web/manage.py': [Errno 2] No such file or directory
exited with code 2
NOTE: I tried the solutions posted for similar problems; they did not work.
No matter what I do, I am not able to get http://localhost/5000 to work, even if the above error goes away when I remove volumes and command from the docker-compose file.
Below is docker-compose.yml:
services:
  web:
    build: ./web
    command: python /web/manage.py runserver 0.0.0.0:8000
    volumes:
      - './users:/usr/src/app'
    ports:
      - 5000:5000
    env_file:
      - ./.env.dev
Below is Dockerfile:
# pull official base image
FROM python:3.9.5-slim-buster
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
Below is manage.py:
from flask.cli import FlaskGroup
from project import app

cli = FlaskGroup(app)

if __name__ == "__main__":
    cli()
Below is __init__.py:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def hello_world():
    return jsonify(hello="world")
Below is the structure, shown in a screenshot (not reproduced here); the files marked in red appeared when I ran docker-compose build.
A couple of changes are needed.
The cli command in your docker-compose.yml file needs to be:
command: python /usr/src/app/manage.py run -h 0.0.0.0 -p 8000
Here the command name is run, not runserver, and the host IP to bind and the port to listen on are configured as separate command options.
Also the configured port mapping for the service needs to map to the container port from the command:
ports:
  - 5000:8000
In your manage.py module, FlaskGroup should be given the create_app option, which is a factory function, not the app instance.
You can implement this as a lambda:
cli = FlaskGroup(create_app=lambda: app)
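The resulting manage.py would then look like this sketch:
from flask.cli import FlaskGroup
from project import app

# FlaskGroup expects a factory, so wrap the existing app instance.
cli = FlaskGroup(create_app=lambda: app)

if __name__ == "__main__":
    cli()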
Edit
The source files are not mounted in the container volume, which is why you're getting "no such file manage.py".
You need to mount your source files in the container volume under /usr/src/app:
volumes:
  - './web:/usr/src/app'
