Cloud Run Python script triggered by Cloud Scheduler - python

I am stuck trying to compose a Docker build for Cloud Run. In testing I run the following Dockerfile and schedule it with cron using a bash script and a docker run --rm -d command.
Dockerfile
# (the question omits the FROM line; a Python base image is assumed here)
FROM python:3.9-slim
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN CFLAGS=-O0 pip install --no-cache-dir -v -r requirements.txt
COPY . .
CMD [ "python", "./Risklist.py" ]
Risklist.py
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    task = run_task()
    return task

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
I run run_task() as a script that creates various files and folders, and I have the functions defined above the app.route decorator. Is this correct? It doesn't seem to be working.
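For reference, here is a minimal, hypothetical stand-in for run_task(); the question doesn't show the real implementation, which creates various files and folders, so the paths and return value below are purely illustrative:
import os

def run_task():
    # hypothetical stand-in for the question's unshown implementation
    out_dir = '/tmp/risklist'  # illustrative output location
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, 'result.txt'), 'w') as f:
        f.write('done')
    return 'task completed'  # the Flask route must return a valid response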


Deploying Flask API to 'cloud run' result in error using gcloud run deploy

I have a basic Flask API to execute a Python file.
Structure is as follows:
app.py
Dockerfile
requirements.txt
test.py
app.py:
from flask import Flask, request
import subprocess
import os

app = Flask(__name__)

@app.route("/execute", methods=["GET"])
def execute():
    result = subprocess.run(["python", "test.py"], capture_output=True)
    return result.stdout

if __name__ == "__main__":
    app.run(port=int(os.environ.get("PORT", 8080)), host='0.0.0.0', debug=True)
Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /app
COPY . .
RUN pip install flask
RUN pip install -r requirements.txt --no-cache-dir
EXPOSE 8080
CMD ["python", "app.py"]
test.py:
Python script that copies one document from a mongodb collection to another as a test.
The app runs on my local machine.
Steps I followed in order to deploy to Cloud Run on gcloud:
1. docker build -t .
2. docker tag gcr.io//
3. docker push gcr.io//
4. gcloud run deploy --image gcr.io// --platform managed --command="python app.py"
The error occurs on step 4. When I look at the logs, the error returned is as follows:
terminated: Application failed to start: kernel init: cannot resolve init executable: error finding executable "python app.py" in PATH [/usr/local/bin /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin]: no such file or directory
Please note I am on a Windows machine, and the path in the error looks like a Linux path, so I am not sure where to go from here.
It looks like you are overriding the entrypoint of your Docker image via the gcloud command. You should not need to do so, since it is already set in the Dockerfile.
Try changing step 4 to:
gcloud run deploy --image gcr.io// --platform managed
Note
Looking at the error, it seems that passing --command="python app.py" changes the CMD command of your Dockerfile to something like
CMD ["python app.py"]
This is interpreted as a single executable called python app.py, which is of course not found (since the executable is python, and app.py is just an argument you want to pass to it).
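You can reproduce the lookup failure outside of Docker; a small Python illustration of why the single string fails PATH lookup while the split form works:
import shutil
import subprocess

# "python" is a real executable, so PATH lookup finds it:
print(shutil.which("python"))          # e.g. /usr/local/bin/python

# "python app.py" as one argv[0] is looked up as a single file name,
# which matches the "no such file or directory" in the error above:
print(shutil.which("python app.py"))   # None

# exec-form CMD passes each list element as a separate argument:
subprocess.run(["python", "--version"])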
Also, as a side note, I would suggest changing the last line of the Dockerfile to an ENTRYPOINT instead of a CMD:
FROM python:3.8-slim-buster
WORKDIR /app
COPY . .
RUN pip install flask
RUN pip install -r requirements.txt --no-cache-dir
EXPOSE 8080
ENTRYPOINT ["python", "app.py"]
See here for some details
I have been able to deploy to Cloud Run successfully using the following; however, when accessing the deployed API it returns a 404 error. Any suggestions would be appreciated.
I switched to Waitress (a production-quality, pure-Python WSGI server).
app.py
from flask import Flask, request
import subprocess

app = Flask(__name__)

@app.route("/run_script", methods=["GET", "POST"])
def run_script():
    result = subprocess.run(["python", "test.py"], capture_output=True)
    return result.stdout

if __name__ == "__main__":
    from waitress import serve
    serve(app, host='0.0.0.0', port=8080)
Dockerfile:
FROM python:3.9
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
test.py:
# pandas and the pymongo collections (db and newCollection) are assumed to be
# set up earlier in the file; only the copy step is shown
df_AS = db.collectionName
# convert the entire collection to a pandas DataFrame
df_AS = pd.DataFrame(list(df_AS.find()))
newCollection.insert_many(df_AS.to_dict("records"))
This deployed successfully; however, the endpoint is not included in the URL and has to be appended manually. Is this normal?
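That is expected given the code above: the only registered route is /run_script, so the bare service URL maps to nothing and returns 404. A quick check (the service URL below is a placeholder, not from the question):
import requests

base = "https://my-service-abc123-uc.a.run.app"  # placeholder Cloud Run URL

# the bare URL returns 404 because no "/" route is registered
print(requests.get(base).status_code)             # 404

# the registered route has to be appended explicitly
print(requests.post(base + "/run_script").status_code)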

Docker container can't run pygame

So actually, I want to run a pygame application using a Docker container. However, when I run the container and click the link in the terminal, it opens a tab that says: "The webpage at http://0.0.0.0:8000/ might be temporarily down or it may have moved permanently to a new web address."
Here's the aliens.py GitHub link: https://github.com/xamox/pygame/blob/master/examples/aliens.py
In the aliens.py file, I added some code:
from fastapi import FastAPI
import uvicorn
app = FastAPI()
and
if __name__ == '__main__':
    uvicorn.run(app, port=8000, host="0.0.0.0")
And here is the Dockerfile that I created:
Python FROM:3.10
WORKDIR /fastapi-app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./app ./app
CMD["python", "./app/aliens.py"]
Is the problem in the IP address of the host?
Dockerfile
# python FROM:3.10 <----
FROM python:3.10
WORKDIR /fastapi-app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./app ./app
#CMD["python", "./app/aliens.py"]
# ^ whitespace missing
CMD ["python", "./app/aliens.py"]
aliens.py
from fastapi import FastAPI
import uvicorn
app = FastAPI()
if __name__ == '__main__':
    uvicorn.run(app, port=8000, host="0.0.0.0")
Testing without the aliens code
# build
$ docker build -t my-app .
# run
$ docker run -d --rm --name mayapp -p 80:8000 my-app
# 80 host-port
# 8000 container-port
$ docker ps -a
CONTAINER ID   IMAGE    COMMAND                  CREATED         STATUS        PORTS                  NAMES
5ba7b461e92e   my-app   "python ./app/aliens…"   3 seconds ago   Up 1 second   0.0.0.0:80->8000/tcp   mayapp
http://localhost/docs
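With the container's port 8000 published on host port 80, http://localhost/docs serves FastAPI's auto-generated docs even though no routes are defined. If you also want the root URL to respond, a minimal sketch adding a health-check route (the route and its payload are illustrative, not part of the original aliens.py):
from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.get("/")
def root():
    # hypothetical health check so http://localhost/ returns 200
    return {"status": "running"}

if __name__ == '__main__':
    uvicorn.run(app, port=8000, host="0.0.0.0")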

GCP Cloud Run deployment failure

I am trying to execute a basic GCP Cloud Run example.
The code itself is very simple:
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    name = os.environ.get("NAME", "World")
    return "Hello {} This is our first application !".format(name)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
Then I tried a deployment using Cloud Code / Deploy on Cloud Run.
Here is the result I get when I hit Deploy:
"Failed to build the app. Error: Build Failed. No push access to specified image repository. Try running with --default-repo flag."
Note that I am using Artifact Registry; repository creation works fine.
The Dockerfile content is this:
FROM python:3.9-slim
ENV PYTHONUNBUFFERED True
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN pip install Flask gunicorn
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
Has anybody faced this issue?
Any help is appreciated.

run flask and golang server in docker

I am trying to run a Go web server and a Flask server in the same Docker container. I have one Dockerfile to build the Flask app. How can I update the Dockerfile to build a container that runs both the Python and Go programs?
ProjectFolder
  pyfolder/
    app.py
    Dockerfile
  main.go
main.go
package main

import (
    "fmt"
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Println("func called")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
app.py
from flask import Flask
import os

app = Flask(__name__)

@app.route("/")
def hello():
    return "Flask inside Docker!!"

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(debug=True, host='0.0.0.0', port=port)
Dockerfile
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
Since you have two separate programs, you would typically run these in two containers, with two separate images. In both cases you can use a basic Dockerfile for their respective languages:
# pyfolder/Dockerfile
FROM python:3.6
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "./app.py"]
# ./Dockerfile
FROM golang:1.16-alpine AS build
COPY go.mod go.sum ./
RUN go mod download
COPY main.go .
RUN go build -o main main.go
FROM alpine
COPY --from=build /go/main /usr/bin/main
EXPOSE 8080
CMD ["main"]
You don't discuss at all why you have two containers or how they communicate. You'll frequently want containers to have no local state at all if they can manage it, and communicate only over network interfaces like HTTP. This means you'll need some way to configure the network address one service uses to call another, since it will be different running in a local development environment vs. running in containers (vs. deployed to the cloud, vs. running in Kubernetes, vs....) An environment variable would be a typical approach; say the Go code needs to call the Python code:
url := os.Getenv("PYAPP_URL")
if url == "" {
    url = "http://localhost:5000" // the Flask app's default local port
}
resp, err := http.Get(url)
A typical tool to run multiple containers together would be Docker Compose. It's not the only tool, but it's a standard part of the Docker ecosystem and it's simpler than many of the alternatives. You'd write a YAML file describing the two containers:
version: '3.8'
services:
  pyapp:
    build: ./pyfolder
  server:
    build: .  # the directory containing the Go code and its Dockerfile
    environment:
      - PYAPP_URL=http://pyapp:5000
    ports:
      - '8080:8080'
    depends_on:
      - pyapp
Running docker-compose up --build will build the two images and start the two containers.
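Once the stack is up, the Go service is reachable on the published host port; a quick check from the host, assuming the compose file above:
import requests

# the compose file publishes the Go server on host port 8080
resp = requests.get("http://localhost:8080/")
print(resp.status_code)  # 200; the Go handler logs "func called" in the container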
You should be able to accomplish this using a multistage build in your Dockerfile.
First, move the Dockerfile to the root directory next to the main.go.
Next, modify your Dockerfile to:
# stage 0: grab the golang image and build the binary
FROM golang:1.16
WORKDIR /src
COPY main.go .
RUN go build -o app main.go

# python portion
FROM python:3.6
WORKDIR /app
COPY pyfolder/ ./
RUN pip install -r requirements.txt
# copy the go binary from stage 0
COPY --from=0 /src/app ./
# start python; note a container runs a single main process, so only the
# Flask app is started here -- the go binary is copied in but not launched
ENTRYPOINT ["python"]
CMD ["app.py"]
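A container runs one foreground command, so the Dockerfile above starts only the Flask app even though the Go binary is present in the image. If you really wanted a single container to run both servers, you would need a small supervisor script as the entrypoint; a sketch (the file and binary names follow the Dockerfile above and are illustrative):
# run_both.py -- launch both servers and exit if either one dies
import subprocess
import sys
import time

go_proc = subprocess.Popen(["./app"])                   # the Go binary
py_proc = subprocess.Popen([sys.executable, "app.py"])  # the Flask app

while True:
    for proc in (go_proc, py_proc):
        if proc.poll() is not None:   # one process has exited
            go_proc.terminate()
            py_proc.terminate()
            sys.exit(proc.returncode)
    time.sleep(1)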
I do think The Fool is correct in suggesting separating these into two different containers, and I would look into docker-compose instead of using a multistage build for this use case.
Docker Multistage Build Reference

Docker: Running a Flask app via Gunicorn - Worker timeouts? Poor performance?

I am trying to create a new app that is written in Python Flask, run by Gunicorn and then dockerised.
The problem I have is that performance inside the Docker container is very poor and inconsistent; I do eventually get a response, but I can't understand why performance is degrading. Sometimes I see [CRITICAL] WORKER TIMEOUT (pid:9) in the logs.
Here is my app:
server.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "The server is running!"

if __name__ == '__main__':
    app.run()
Dockerfile
FROM python:3.6.9-slim
# Copy all the files to the src folder
COPY build/ /usr/src/
# Create the virtual environment
RUN python3 -m venv /usr/src/myapp_venv
# Install the requirements
RUN /usr/src/myapp_venv/bin/pip3 install -r /usr/src/requirements.txt
# Run gunicorn from the virtual environment
# --chdir sets the directory where gunicorn should look for the server files
# server:app means load the "server.py" module and use the "app" object within it
ENTRYPOINT ["/usr/src/myapp_venv/bin/gunicorn", "--bind", "0.0.0.0:5000", "--workers", "1", "--chdir", "/usr/src/", "server:app"]
# Expose the gunicorn port
EXPOSE 5000
requirements.txt
Click==7.0
Flask==1.1.1
gunicorn==20.0.0
itsdangerous==1.1.0
Jinja2==2.10.3
MarkupSafe==1.1.1
Werkzeug==0.16.0
I run the docker container like this:
docker build -t killerkode/myapp .
docker run --name myapp -p 5000:5000 killerkode/myapp
I managed to find this helpful article, which explains why Gunicorn hangs:
https://pythonspeed.com/articles/gunicorn-in-docker/
The solution for me was to change the worker temp directory and increase the workers to a minimum of 2. I still see workers being killed off, but there are no longer any delays or slowness. I suspect adding the gthread worker class will improve things further.
Here is my updated Dockerfile:
FROM python:3.6.9-slim
# Copy all the files to the src folder
COPY build/ /usr/src/
# Create the virtual environment
RUN python3 -m venv /usr/src/myapp_venv
# Install the requirements
RUN /usr/src/myapp_venv/bin/pip3 install -r /usr/src/requirements.txt
# Run gunicorn from the virtual environment
# --worker-tmp-dir /dev/shm keeps the worker heartbeat files in memory
# server:app means load the "server.py" module and use the "app" object within it
ENTRYPOINT ["/usr/src/myapp_venv/bin/gunicorn", "--bind", "0.0.0.0:5000", "--worker-tmp-dir", "/dev/shm", "--workers", "2", "--chdir", "/usr/src/", "server:app"]
# Expose the gunicorn port
EXPOSE 5000
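The same settings can also live in a configuration file instead of a long ENTRYPOINT; recent Gunicorn versions read a gunicorn.conf.py from the working directory by default. A sketch mirroring the flags above:
# gunicorn.conf.py -- equivalent of the ENTRYPOINT flags above
bind = "0.0.0.0:5000"
workers = 2
worker_tmp_dir = "/dev/shm"  # memory-backed, avoids heartbeat stalls on disk
chdir = "/usr/src/"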
