I have a Flask application which works fine in a venv, but fails as a Docker image because the class-based config file cannot be found.
app
|-- __init__.py
|-- main.py
|-- Dockerfile
|-- config.py
The __init__.py file:
from flask import Flask

app = Flask(__name__)

if app.config["ENV"] == "development":
    app.config.from_object("config.DevelopmentConfig")
elif app.config["ENV"] == "production":
    app.config.from_object("config.ProductionConfig")
else:
    app.config.from_object("config.DevelopmentConfig")
Stripped down config file:
import os
from pymongo import MongoClient
import pymongo
class Config(object):
    DEBUG = False
    TESTING = False

class ProductionConfig():
    ...

class DevelopmentConfig(Config):
    ...
Error message:
werkzeug.utils.ImportStringError: import_string() failed for 'config.ProductionConfig'. Possible reasons are:
Dockerfile, adapted from Google:
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.10-slim
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
ENV PORT 8080
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip install --no-cache-dir -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
# Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
Things I've tried:
Placing the config file in the app directory
Including app.config.from_pyfile('config.py') below the from_object
Following the instance structure at https://flask.palletsprojects.com/en/2.2.x/config/
Any help to unpick this is appreciated.
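For what it's worth, from_object("config.ProductionConfig") is resolved through a normal Python import, so config.py has to be importable from the working directory inside the container. Below is a minimal diagnostic sketch (the filename check_config.py is made up) that can be run inside the image, e.g. with docker run --rm <image> python check_config.py:
# check_config.py - hypothetical helper to see whether the config module that
# Flask is asked to import is actually reachable from inside the container.
import importlib
import os
import sys

print("working directory:", os.getcwd())
print("start of sys.path:", sys.path[:3])

try:
    cfg = importlib.import_module("config")
    print("config imported from:", cfg.__file__)
    print("has ProductionConfig:", hasattr(cfg, "ProductionConfig"))
except ImportError as exc:
    print("config is not importable from here:", exc)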
Related
I have a basic flask API to execute a python file.
Structure is as follows:
app.py
Dockerfile
requirements.txt
test.py
app.py:
from flask import Flask, request
import subprocess
import os

app = Flask(__name__)

@app.route("/execute", methods=["GET"])
def execute():
    result = subprocess.run(["python", "test.py"], capture_output=True)
    return result.stdout

if __name__ == "__main__":
    app.run(port=int(os.environ.get("PORT", 8080)), host='0.0.0.0', debug=True)
Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /app
COPY . .
RUN pip install flask
RUN pip install -r requirements.txt --no-cache-dir
EXPOSE 8080
CMD ["python", "app.py"]
test.py:
Python script that copies one document from a mongodb collection to another as a test.
The app runs on my local machine.
Steps I followed in order to deploy to cloud run on gcloud:
docker build -t .
docker tag gcr.io//
docker push gcr.io//
gcloud run deploy --image gcr.io// --platform managed --command="python app.py"
Error on step 4. When I look at the logs the error returned are as follows:
terminated: Application failed to start: kernel init: cannot resolve init executable: error finding executable "python app.py" in PATH [/usr/local/bin /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin]: no such file or directory
Please note I am on a Windows machine and the path in the error looks like a Linux path, so I am not sure where to go from here.
It looks like you are overriding the entrypoint of your docker image via the gcloud command.
You should not need to do so since it is already set in the Dockerfile.
Try changing step 4 to:
gcloud run deploy --image gcr.io// --platform managed
Note
Looking at the error, it seems that passing --command="python app.py" changes the CMD of your Dockerfile to something like
CMD ["python app.py"]
This is interpreted as a single executable called python app.py, which is of course not found (since the executable is python and app.py is just an argument you want to pass to it).
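A tiny local illustration of that argv difference (just a sketch run on your own machine, not on Cloud Run; it only shows how the first element of the command list is looked up on PATH):
# The runtime resolves argv[0] on PATH, so the single string "python app.py"
# is treated as one executable name and is never found.
import shutil

broken = ["python app.py"]       # what --command="python app.py" produces
working = ["python", "app.py"]   # the exec-form CMD from the Dockerfile

for argv in (broken, working):
    # which() may report False for "python" on machines that only ship python3
    print(argv, "-> executable found on PATH:", shutil.which(argv[0]) is not None)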
Also, as a side note, I would suggest changing the last line of the Dockerfile to an ENTRYPOINT instead of CMD:
FROM python:3.8-slim-buster
WORKDIR /app
COPY . .
RUN pip install flask
RUN pip install -r requirements.txt --no-cache-dir
EXPOSE 8080
ENTRYPOINT ["python", "app.py"]
See here for some details
I have been able to successfully deploy to Cloud Run using the following; however, when accessing the deployed API it returns a 404 error. Any suggestions will be appreciated.
I switched to Waitress (Waitress is meant to be a production-quality, pure-Python WSGI server).
app.py
from flask import Flask, request
import subprocess

app = Flask(__name__)

@app.route("/run_script", methods=["GET", "POST"])
def run_script():
    result = subprocess.run(["python", "test.py"], capture_output=True)
    return result.stdout

if __name__ == "__main__":
    from waitress import serve
    serve(app, host='0.0.0.0', port=8080)
Dockerfile:
FROM python:3.9
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
test.py:
# Assumes existing pymongo handles `db` and `newCollection`; the connection
# setup was omitted from the question.
import pandas as pd

df_AS = db.collectionName
# convert entire collection to a Pandas dataframe
df_AS = pd.DataFrame(list(df_AS.find()))
newCollection.insert_many(df_AS.to_dict("records"))
This successfully deployed; however, the endpoint is not included in the URL and has to be manually appended to the end of the URL. Is this normal?
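That is expected Flask behaviour rather than a deployment problem: only the registered routes are served, so the service root returns 404 and /run_script has to be appended to the base URL. A quick sketch against the app.py above using Flask's test client:
# Assumes the app object from the app.py shown above.
from app import app

print(app.url_map)  # shows /run_script plus the built-in /static rule

with app.test_client() as client:
    print(client.get("/").status_code)            # 404 - no root route is defined
    print(client.get("/run_script").status_code)  # 200 once test.py runs cleanly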
So actually, I want to run a pygame application in a Docker container. However, when I run the container and click the link in the terminal, it opens a tab that says: "The webpage at http://0.0.0.0:8000/ might be temporarily down or it may have moved permanently to a new web address."
here's the aliens.py github link: https://github.com/xamox/pygame/blob/master/examples/aliens.py
In the aliens.py file, I added some code:
from fastapi import FastAPI
import uvicorn
app = FastAPI()
and
if __name__ == '__main__': uvicorn.run(app, port=8000, host="0.0.0.0")
And here is the Dockerfile I created:
Python FROM:3.10
WORKDIR /fastapi-app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./app ./app
CMD["python", "./app/aliens.py"]
Is the problem in the IP address of the host?
Dockerfile
# python FROM:3.10 <----
FROM python:3.10
WORKDIR /fastapi-app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./app ./app
#CMD["python", "./app/aliens.py"]
# ^ whitespace missing
CMD ["python", "./app/aliens.py"]
aliens.py
from fastapi import FastAPI
import uvicorn
app = FastAPI()
if __name__ == '__main__':
    uvicorn.run(app, port=8000, host="0.0.0.0")
Testing without aliens-code
# build
$ docker build -t my-app .
# run
$ docker run -d --rm --name mayapp -p 80:8000 my-app
# 80 host-port
# 8000 container-port
$ docker ps -a
CONTAINER ID   IMAGE    COMMAND                  CREATED         STATUS        PORTS                  NAMES
5ba7b461e92e   my-app   "python ./app/aliens…"   3 seconds ago   Up 1 second   0.0.0.0:80->8000/tcp   mayapp
http://localhost/docs
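A quick smoke test from the host, assuming the container from the run command above is up with -p 80:8000:
# Fetches the FastAPI docs page through the published host port.
from urllib.request import urlopen

with urlopen("http://localhost/docs", timeout=5) as resp:
    print(resp.status)     # expect 200 from FastAPI's interactive docs
    print(resp.read(80))   # first bytes of the HTML page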
I am trying to run a Go web server and a Flask server in the same Docker container. I have one Dockerfile to build the Flask app. How can I update the Dockerfile to build a container that runs both Python and Go?
ProjectFolder
|-- pyfolder
|   |-- app.py
|   |-- Dockerfile
|-- main.go
main.go
package main

import (
    "fmt"
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Println("func called")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
app.py
from flask import Flask
import os

app = Flask(__name__)

@app.route("/")
def hello():
    return "Flask inside Docker!!"

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(debug=True, host='0.0.0.0', port=port)
Dockerfile
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
Since you have two separate programs, you would typically run these in two containers, with two separate images. In both cases you can use a basic Dockerfile for their respective languages:
# pyfolder/Dockerfile
FROM python:3.6
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["./app.py"]
# ./Dockerfile
FROM golang:1.16-alpine AS build
COPY go.mod go.sum ./
RUN go mod download
COPY main.go .
RUN go build -o main main.go
FROM alpine
COPY --from=build /go/main /usr/bin/main
EXPOSE 8080
CMD ["main"]
You don't discuss at all why you have two containers or how they communicate. You'll frequently want containers to have no local state at all if they can manage it, and communicate only over network interfaces like HTTP. This means you'll need some way to configure the network address one service uses to call another, since it will be different running in a local development environment vs. running in containers (vs. deployed to the cloud, vs. running in Kubernetes, vs....) An environment variable would be a typical approach; say the Go code needs to call the Python code:
url := os.Getenv("PYAPP_URL")
if url == "" {
    url = "http://localhost:8080"
}
resp, err := http.Get(url)
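The same pattern works in the other direction; here is a sketch of the Flask side calling the Go service, where GOAPP_URL is a made-up variable name that would be set in the compose file the same way PYAPP_URL is set below:
# Reads the Go service address from the environment, falling back to a local
# default for development. GOAPP_URL is hypothetical, not part of the setup above.
import os
import urllib.request

go_url = os.environ.get("GOAPP_URL", "http://localhost:8080")
with urllib.request.urlopen(go_url, timeout=5) as resp:
    print(resp.status)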
A typical tool to run multiple containers together would be Docker Compose. It's not the only tool, but it's a standard part of the Docker ecosystem and it's simpler than many of the alternatives. You'd write a YAML file describing the two containers:
version: '3.8'
services:
  pyapp:
    build: ./pyfolder
  server:
    build: .  # (the directory containing the Go code and its Dockerfile)
    environment:
      - PYAPP_URL=http://pyapp:5000
    ports:
      - '8080:8080'
    depends_on:
      - pyapp
Running docker-compose up --build will build the two images and start the two containers.
You should be able to accomplish this using a multistage build in your Dockerfile.
First, move the Dockerfile to the root directory next to the main.go.
Next, modify your Dockerfile to
# grab the golang image
FROM golang:1.16
WORKDIR /src
# the go source has to be copied in before it can be built
COPY main.go .
RUN go build -o app main.go

# python portion
FROM python:3.6
WORKDIR /app
# copy the flask app and its requirements
COPY pyfolder/ ./

# copy the go binary from stage 0
COPY --from=0 /src/app ./

# install the python dependencies
RUN pip install -r requirements.txt

# start python; only the last ENTRYPOINT/CMD pair takes effect, so the go
# binary is not started automatically by this image
ENTRYPOINT ["python"]
CMD ["app.py"]
I do think The Fool is correct in suggesting to separate these into two different containers. I would look into docker-compose instead of using a multi-stage build for this use.
Docker Multistage Build Reference
can't open file '/web/manage.py': [Errno 2] No such file or directory
exited with code 2
NOTE: I have tried the solutions posted for similar problems; none of them worked.
No matter what I do, I am not able to get http://localhost:5000 to work, even if the above error goes away when I remove the volumes and command entries from the compose service.
Below is docker-compose.yml
services:
  web:
    build: ./web
    command: python /web/manage.py runserver 0.0.0.0:8000
    volumes:
      - './users:/usr/src/app'
    ports:
      - 5000:5000
    env_file:
      - ./.env.dev
Below is Dockerfile:
# pull official base image
FROM python:3.9.5-slim-buster
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
Below is manage.py:
from flask.cli import FlaskGroup
from project import app

cli = FlaskGroup(app)

if __name__ == "__main__":
    cli()
Below is __init__.py:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def hello_world():
    return jsonify(hello="world")
Below is the structure (from a screenshot that is not included here). The files marked in red appeared when I ran docker-compose build.
A couple of changes are needed.
The command in your docker-compose.yml file needs to be:
command: python /usr/src/app/manage.py run -h 0.0.0.0 -p 8000
Here the command name is run, not runserver. Also, the host IP to bind to and the port to listen on are configured as separate command options.
The port mapping configured for the service also needs to target the container port used in the command:
ports:
- 5000:8000
In your manage.py module, FlaskGroup should be given the create_app option, which is a factory function, not the app instance.
You can implement this as a lambda function.
cli = FlaskGroup(create_app=(lambda:app))
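An equivalent without the lambda, in case a named factory reads better (a sketch that still reuses the app object imported from project):
# manage.py with an explicit factory function instead of the lambda.
from flask.cli import FlaskGroup
from project import app

def create_app():
    # FlaskGroup calls this to obtain the application instance.
    return app

cli = FlaskGroup(create_app=create_app)

if __name__ == "__main__":
    cli()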
Edit
The source files are not mounted in the container volume; that is why you're getting "no such file manage.py".
You need to mount your source files in the container volume under /usr/src/app.
volumes:
- './web:/usr/src/app'
I am trying to create a new app that is written in Python Flask, run by gunicorn and then dockerised.
The problem I have is that performance inside the Docker container is very poor and inconsistent; I do eventually get a response, but I can't understand why performance keeps degrading. Sometimes I see [CRITICAL] WORKER TIMEOUT (pid:9) in the logs.
Here is my app:
server.py
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
    return "The server is running!"

if __name__ == '__main__':
    app.run()
Dockerfile
FROM python:3.6.9-slim
# Copy all the files to the src folder
COPY build/ /usr/src/
# Create the virtual environment
RUN python3 -m venv /usr/src/myapp_venv
# Install the requirements
RUN /usr/src/myapp_venv/bin/pip3 install -r /usr/src/requirements.txt
# Runs gunicorn
# --chdir sets the directory where gunicorn should look for the server files
# server:app means load the "server.py" file and use the "app" object defined within it
ENTRYPOINT ["/usr/src/myapp_venv/bin/gunicorn", "--bind", "0.0.0.0:5000", "--workers", "1", "--chdir", "/usr/src/", "server:app"]
# Expose the gunicorn port
EXPOSE 5000
requirements.txt
Click==7.0
Flask==1.1.1
gunicorn==20.0.0
itsdangerous==1.1.0
Jinja2==2.10.3
MarkupSafe==1.1.1
Werkzeug==0.16.0
I run the docker container like this:
docker build -t killerkode/myapp .
docker run --name myapp -p 5000:5000 killerkode/myapp
I managed to find this helpful article which explains why Gunicorn hangs.
https://pythonspeed.com/articles/gunicorn-in-docker/
The solution for me was to change the worker temp directory and increase the number of workers to 2. I still see workers being killed off, but there are no longer any delays or slowness. I suspect adding the gthread worker class will improve things further.
Here is my updated Dockerfile:
FROM python:3.6.9-slim
# Copy all the files to the src folder
COPY build/ /usr/src/
# Create the virtual environment
RUN python3 -m venv /usr/src/myapp_venv
# Install the requirements
RUN /usr/src/myapp_venv/bin/pip3 install -r /usr/src/requirements.txt
# Runs gunicorn
# --chdir sets the directory where gunicorn should look for the server files
# server:app means load the "server.py" file and use the "app" object defined within it
ENTRYPOINT ["/usr/src/myapp_venv/bin/gunicorn", "--bind", "0.0.0.0:5000", "--worker-tmp-dir", "/dev/shm", "--workers", "2", "--chdir", "/usr/src/", "server:app"]
# Expose the gunicorn port
EXPOSE 5000
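The same flags can also live in a gunicorn config file instead of the ENTRYPOINT line, which keeps the Dockerfile shorter. Here is a sketch mirroring the options above plus the gthread tweak mentioned, assuming a gunicorn.conf.py copied to /usr/src/ next to server.py and pointed at with gunicorn's -c flag:
# gunicorn.conf.py - equivalent of the command-line flags in the ENTRYPOINT above.
bind = "0.0.0.0:5000"
workers = 2
worker_class = "gthread"      # the threaded worker mentioned above
threads = 4
worker_tmp_dir = "/dev/shm"   # keep the heartbeat files on a RAM-backed filesystem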