gunicorn Flask application hangs in Docker - python

I have a Flask application that I run through gunicorn (on a Dell Latitude 5410 laptop with Ubuntu 20.04), and I configured Docker to run it as suggested in this wonderful guide.
Here is my boot.sh:
#!/bin/sh
source venv/bin/activate
while true; do
    flask db init
    flask db migrate
    flask db upgrade
    if [[ "$?" == "0" ]]; then
        break
    fi
    echo Deploy command failed, retrying in 5 secs...
    sleep 5
done
exec gunicorn -b 0.0.0.0:5000 --access-logfile - --error-logfile - main:app
and the Dockerfile just calls it as the entrypoint:
FROM python:3.8-alpine
RUN adduser -D diagnosticator
RUN apk add --no-cache bash mariadb-dev mariadb-client python3-dev build-base libffi-dev openssl-dev
WORKDIR /home/diagnosticator
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -U pip
RUN venv/bin/pip install wheel
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql
COPY app app
COPY upload upload
COPY variant_dependencies variant_dependencies
COPY main.py config.py boot.sh ./
COPY convert_VCF_REDIS.py asilo_variant_functions.py cloud_bigtable_functions.py mongodb_functions.py redis_functions.py ./
COPY docker_functions.py ./
RUN chmod a+x boot.sh
ENV FLASK_APP main.py
RUN chown -R diagnosticator:diagnosticator ./
USER diagnosticator
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
I run the container in a Docker network created with:
docker network create --driver bridge diagnosticator-net
The problem I noticed is that after a short period of inactivity the container hangs, with this gunicorn error:
[CRITICAL] WORKER TIMEOUT
Here are the docker logs:
192.168.32.1 - - [12/Jun/2021:12:28:30 +0000] "GET /patient_page/GHARIgANY21uuITW2196372 HTTP/1.1" 200 19198 "http://127.0.0.1:3000/patient_result" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36"
[2021-06-12 12:29:10 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:16)
[2021-06-12 12:29:10 +0000] [16] [INFO] Worker exiting (pid: 16)
[2021-06-12 12:29:10 +0000] [17] [INFO] Booting worker with pid: 17
[2021-06-12 12:29:10,905] INFO in __init__: Diagnosticator-local startup
[2021-06-12 12:29:41 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:17)
[2021-06-12 12:29:41 +0000] [17] [INFO] Worker exiting (pid: 17)
[2021-06-12 12:29:41 +0000] [18] [INFO] Booting worker with pid: 18
[2021-06-12 12:29:42,094] INFO in __init__: Diagnosticator-local startup
[2021-06-12 12:30:12 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:18)
[2021-06-12 12:30:12 +0000] [18] [INFO] Worker exiting (pid: 18)
and the application just hangs, loading forever on any page you try to access.
I've seen suggestions here and while googling, like this one or this one, to adjust gunicorn's --timeout, but the problem keeps happening no matter what value I put there.
Any help will be deeply appreciated, because I cannot figure out a real solution!

Try running Gunicorn with --log-level debug... it should give some trace of the error.
Plus, you can try adding --worker-class to your gunicorn command.
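For reference, both of these can also be set in a gunicorn config file, which is plain Python, instead of growing the command line in boot.sh. A minimal sketch, assuming it is saved as gunicorn.conf.py next to main.py and loaded with gunicorn -c gunicorn.conf.py main:app (the file name and values are my assumptions, not from the question):
# gunicorn.conf.py -- sketch only
bind = "0.0.0.0:5000"

# Surface more detail about why the master is killing workers.
loglevel = "debug"
accesslog = "-"
errorlog = "-"

# The default sync worker is killed by the master if it does not heartbeat
# within `timeout` seconds; a threaded worker can help when requests block on I/O.
worker_class = "gthread"
threads = 4
timeout = 120
Note that worker classes such as gevent or eventlet would additionally require installing the corresponding package into the venv; gthread ships with gunicorn.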

Related

Failed to parse 'app.server' as an attribute name or function call when running docker

I am trying to run my app from Docker and I am getting this error message in the docker logs for my Dash container. I have searched a lot but could not find anything. Please help!
The log messages from docker logs for the Dash image:
[2020-08-22 06:44:32 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-08-22 06:44:32 +0000] [1] [INFO] Listening at: http://0.0.0.0:8050 (1)
[2020-08-22 06:44:32 +0000] [1] [INFO] Using worker: sync
[2020-08-22 06:44:32 +0000] [8] [INFO] Booting worker with pid: 8
Failed to parse 'app.server' as an attribute name or function call.
[2020-08-22 06:44:33 +0000] [8] [INFO] Worker exiting (pid: 8)
[2020-08-22 06:44:33 +0000] [1] [INFO] Shutting down: Master
[2020-08-22 06:44:33 +0000] [1] [INFO] Reason: App failed to load.
My API is working fine
[2020-08-22 06:44:31 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-08-22 06:44:31 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2020-08-22 06:44:31 +0000] [1] [INFO] Using worker: sync
[2020-08-22 06:44:31 +0000] [8] [INFO] Booting worker with pid: 8
This is my Dash app. The file name is app.py, inside the dash folder.
app = dash.Dash(
    __name__,
    external_stylesheets=external_stylesheets,
    meta_tags=[
        {"name": "viewport", "content": "width=device-width, initial-scale=1"}
    ],
    suppress_callback_exceptions=True
)

if __name__ == '__main__':
    ENVIRONMENT = os.environ.get("ENVIRONMENT", "dev")
    DEBUG = ENVIRONMENT == "dev"
    HOST = '0.0.0.0' if ENVIRONMENT == "prod" else 'localhost'
    app.run_server(debug=DEBUG, host=HOST)
This is my Dockerfile for Dash
FROM python:3.7
ADD requirements.txt /app/
WORKDIR /app
RUN pip install -r requirements.txt
ADD . /app
EXPOSE 8050
CMD ["gunicorn", "-b", "0.0.0.0:8050", "app:app.server"]
and this is my docker-compose.yml file
version: '3'
services:
  api:
    build:
      context: src/api
      dockerfile: Dockerfile
    environment:
      - ENVIRONMENT=prod
    restart: always
  dash:
    build:
      context: src/dash
      dockerfile: Dockerfile
    ports:
      - "8050:8050"
    environment:
      - ENVIRONMENT=prod
      - API_URL=http://api:5000/api
    depends_on:
      - api
    restart: always
This is my API code, as the app.py file in the api folder:
app = Flask(__name__)
api = Blueprint('api', __name__)
app.register_blueprint(api, url_prefix='/api')
if __name__ == "__main__":
ENVIRONMENT = os.environ.get("ENVIRONMENT", "dev")
DEBUG = ENVIRONMENT == "dev"
HOST = '0.0.0.0' if ENVIRONMENT == "prod" else 'localhost'
app.run(debug=DEBUG, host=HOST)
This is my Dockerfile for api
FROM python:3.7
ADD requirements.txt /app/
WORKDIR /app
RUN pip install -r requirements.txt
ADD . /app
EXPOSE 5000
CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]
This is what I get after running docker-compose up --build -d:
Step 5/7 : ADD . /app
---> 5f1b89f2daeb
Step 6/7 : EXPOSE 5000
---> Running in 74a03831d03e
Removing intermediate container 74a03831d03e
---> cab01af5bfee
Step 7/7 : CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]
---> Running in 606197aae6fd
Removing intermediate container 606197aae6fd
---> 35be6754e766
Successfully built 35be6754e766
Successfully tagged appname_api:latest
Building dash
.
.
.
Step 5/7 : ADD . /app
---> c1111dca7a60
Step 6/7 : EXPOSE 8050
---> Running in ef1a216db216
Removing intermediate container ef1a216db216
---> a84d1d8ce503
Step 7/7 : CMD ["gunicorn", "-b", "0.0.0.0:8050", "app:app.server"]
---> Running in afc121660ddc
Removing intermediate container afc121660ddc
---> 600011a005ca
Successfully built 600011a005ca
Successfully tagged appname_dash:latest
Creating appname_api_1 ... done
Creating appname_dash_1 ... done
The issue is with the arguments you're passing to gunicorn. Based on what you're showing here, I suggest you try changing:
CMD ["gunicorn", "-b", "0.0.0.0:8050", "app:app.server"] to:
CMD ["gunicorn", "-b", "0.0.0.0:8050", "app:server"]
I fixed this error on a coworker's app where they had the following in app.py:
server = Flask(__name__)  # define flask app.server

app = dash.Dash(
    __name__,
    server=server,
)
And then in an index.py they were doing:
from app import app
server = app.server  # I added this part here
And then the Docker CMD was:
CMD ["gunicorn", "index:server", "-b", ":8050"]
And it worked.
I had a slightly different error, but Google led me here, so I am posting for others in a similar situation. I had the error below:
Failed to parse '' as an attribute name or function call.
My problem was that in my Procfile I had a space between "wsgi:" and "app". The following is the correct syntax for the Procfile:
web: gunicorn wsgi:app
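For anyone unsure what wsgi:app points at: it means "the object named app inside the module wsgi". A minimal wsgi.py sketch, assuming the Flask instance is defined in app.py (that file name is my assumption):
# wsgi.py -- entry point for "web: gunicorn wsgi:app"
from app import app  # app.py is assumed to define a Flask instance named "app"

if __name__ == "__main__":
    # Only used for local runs; gunicorn imports "app" directly.
    app.run()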

"[CRITICAL] WORKER TIMEOUT" in logs when running "Hello Cloud Run with Python" from GCP Setup Docs

Following the tutorial here, I have the following two files:
app.py
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['GET'])
def hello():
    """Return a friendly HTTP greeting."""
    who = request.args.get('who', 'World')
    return f'Hello {who}!\n'

if __name__ == '__main__':
    # Used when running locally only. When deploying to Cloud Run,
    # a webserver process such as Gunicorn will serve the app.
    app.run(host='localhost', port=8080, debug=True)
Dockerfile
# Use an official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.7-slim
# Install production dependencies.
RUN pip install Flask gunicorn
# Copy local code to the container image.
WORKDIR /app
COPY . .
# Service must listen to $PORT environment variable.
# This default value facilitates local development.
ENV PORT 8080
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 app:app
I then build and run them using Cloud Build and Cloud Run:
PROJECT_ID=$(gcloud config get-value project)
DOCKER_IMG="gcr.io/$PROJECT_ID/helloworld-python"
gcloud builds submit --tag $DOCKER_IMG
gcloud run deploy --image $DOCKER_IMG --platform managed
The code appears to run fine, and I am able to access the app on the given URL. However, the logs seem to indicate a critical error, and the workers keep restarting. Here is the log file from Cloud Run after starting up the app and making a few requests in my web browser:
2020-03-05T03:37:39.392Z Cloud Run CreateService helloworld-python ...
2020-03-05T03:38:03.285477Z [2020-03-05 03:38:03 +0000] [1] [INFO] Starting gunicorn 20.0.4
2020-03-05T03:38:03.287294Z [2020-03-05 03:38:03 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
2020-03-05T03:38:03.287362Z [2020-03-05 03:38:03 +0000] [1] [INFO] Using worker: threads
2020-03-05T03:38:03.318392Z [2020-03-05 03:38:03 +0000] [4] [INFO] Booting worker with pid: 4
2020-03-05T03:38:15.057898Z [2020-03-05 03:38:15 +0000] [1] [INFO] Starting gunicorn 20.0.4
2020-03-05T03:38:15.059571Z [2020-03-05 03:38:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
2020-03-05T03:38:15.059609Z [2020-03-05 03:38:15 +0000] [1] [INFO] Using worker: threads
2020-03-05T03:38:15.099443Z [2020-03-05 03:38:15 +0000] [4] [INFO] Booting worker with pid: 4
2020-03-05T03:38:16.320286Z GET 200 297 B 2.9 s Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/
2020-03-05T03:38:16.489044Z GET 404 508 B 6 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/favicon.ico
2020-03-05T03:38:21.575528Z GET 200 288 B 6 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/
2020-03-05T03:38:27.000761Z GET 200 285 B 5 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/?who=me
2020-03-05T03:38:27.347258Z GET 404 508 B 13 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/favicon.ico
2020-03-05T03:38:34.802266Z [2020-03-05 03:38:34 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:4)
2020-03-05T03:38:35.302340Z [2020-03-05 03:38:35 +0000] [4] [INFO] Worker exiting (pid: 4)
2020-03-05T03:38:48.803505Z [2020-03-05 03:38:48 +0000] [5] [INFO] Booting worker with pid: 5
2020-03-05T03:39:10.202062Z [2020-03-05 03:39:09 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:5)
2020-03-05T03:39:10.702339Z [2020-03-05 03:39:10 +0000] [5] [INFO] Worker exiting (pid: 5)
2020-03-05T03:39:18.801194Z [2020-03-05 03:39:18 +0000] [6] [INFO] Booting worker with pid: 6
Note the worker timeouts and reboots at the end of the logs. The fact that it's a CRITICAL error makes me think it shouldn't be happening. Is this expected behavior? Is it a side effect of the Cloud Run machinery starting and stopping my service as requests come and go?
Cloud Run has scaled down one of your instances, and the gunicorn arbiter is considering it stalled.
You should add --timeout 0 to your gunicorn invocation to disable the worker timeout entirely; it's unnecessary for Cloud Run.
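If you prefer to keep the flags out of the Dockerfile CMD, the same settings can live in a gunicorn config file, which is itself Python. A sketch under that assumption (the gunicorn.conf.py file name is mine; point the CMD at it with -c gunicorn.conf.py):
# gunicorn.conf.py -- mirrors the tutorial's flags plus the suggested timeout
import os

bind = f"0.0.0.0:{os.environ.get('PORT', '8080')}"
workers = 1
threads = 8
timeout = 0  # disable the worker heartbeat timeout, as suggested above for Cloud Run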
I was facing the error [11229] [CRITICAL] WORKER TIMEOUT (pid:11232) on Heroku.
I changed my Procfile to this:
web: gunicorn --workers=3 app:app --timeout 200 --log-file -
and it fixed my problem by increasing the --timeout.
Here's a working example of a Flask app on Cloud Run. My guess is that the last line of your Dockerfile and the last part of your Python file are the ones causing this behavior.
main.py
# main.py
#gcloud beta run services replace service.yaml
from flask import Flask
app = Flask(__name__)
#app.route("/")
def hello_world():
msg = "Hello World"
return msg
Dockerfile (the apt-get part is not needed)
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.7
# Install manually all the missing libraries
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
# Install Python dependencies.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app
then build using:
gcloud builds submit --tag gcr.io/[PROJECT]/[MY_SERVICE]
and deploy:
gcloud beta run deploy [MY_SERVICE] --image gcr.io/[PROJECT]/[MY_SERVICE] --region europe-west1 --platform managed
UPDATE
I've checked the logs you provided again.
Getting this kind of warning/error right after a new deployment is normal: your old instances are no longer handling any requests, but they sit idle until they are completely shut down.
Gunicorn also has a default timeout of 30 s, which matches the interval between the "Booting worker" line and the error.
For those who land here with this problem but with Django (it probably works the same way) behind gunicorn, supervisor, and nginx: check the configuration in your gunicorn_start file, or wherever you keep your gunicorn parameters. In my case it looks like this; add the timeout on the last line:
NAME="myapp" # Name of the application
DJANGODIR=/webapps/myapp # Django project directory
SOCKFILE=/webapps/myapp/run/gunicorn.sock # we will communicte using this unix socket
USER=root # the user to run as
GROUP=root # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=myapp.settings # which settings file should Django use
DJANGO_WSGI_MODULE=myapp.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER --group=$GROUP \
    --bind=unix:$SOCKFILE \
    --log-level=debug \
    --log-file=- \
    --timeout 120  # <-- add the timeout here

Google App Engine python37 is ignoring entrypoint

Since yesterday, GAE has been ignoring my entrypoint in app.yaml.
My app.yaml:
runtime: python37
entrypoint: gunicorn -k eventlet -b :$PORT main:app
This leads to the following log output:
2019-04-24 07:39:58 default[20190423t203005] [2019-04-24 07:39:58 +0000] [8] [INFO] Starting gunicorn 19.9.0
2019-04-24 07:39:58 default[20190423t203005] [2019-04-24 07:39:58 +0000] [8] [INFO] Listening at: http://0.0.0.0:8081 (8)
2019-04-24 07:39:58 default[20190423t203005] [2019-04-24 07:39:58 +0000] [8] [INFO] Using worker: threads
But the worker should be eventlet, not threads.
As far as we can tell, this is actually an issue with Google App Engine: the configured entrypoint is being ignored. The incident started yesterday. Our best guess is that it was caused by an update to the process that configures the gVisor container's entrypoint during deploy (either App Engine or gVisor may have broken it).
If you change your log view to show all logs, you'll see the execution entry point is always:
Running /bin/sh /bin/sh -c exec gunicorn main:app --workers 1 -c /config/gunicorn.py
We're in the process of filing a ticket. If you can, you should do the same.

Django + Gunicorn - deploy and stay connected to the port?

Correct me if I am wrong: I can use gunicorn to deploy a Django project. For instance, I can deploy my app, helloapp, in this way:
$ cd env
$ . bin/activate
(env) $ cd ..
(env) $ pip install -r requirements.txt
(env) root@localhost:/var/www/html/helloapp# gunicorn helloapp.wsgi:application
[2017-05-18 22:22:38 +0000] [1779] [INFO] Starting gunicorn 19.7.1
[2017-05-18 22:22:38 +0000] [1779] [INFO] Listening at: http://127.0.0.1:8000 (1779)
[2017-05-18 22:22:38 +0000] [1779] [INFO] Using worker: sync
[2017-05-18 22:22:38 +0000] [1783] [INFO] Booting worker with pid: 1783
So now my Django site is running at http://127.0.0.1:8000.
But it is no longer available as soon as I close or exit my terminal. So how can I keep it listening on port 8000 even after I have closed my terminal?
As with any long-running process, you need to run it as a service under some kind of manager. Since you're on Ubuntu, you probably want to use systemd; full instructions are in the gunicorn deployment docs. Note that you will also need to configure nginx as a reverse proxy in front of gunicorn.

How do I install Sugyan-Tensorflow-MNIST?

I want to run Sugyan Tensorflow MNIST on my laptop. It is an implementation of a number recognition system.
I am using Ubuntu 16.04 LTS. I have all the requirements installed from requirements.txt.
What steps should I go through next?
How do I use npm install?
After I execute the npm install command in the terminal, I get this warning:
aniruddh@Aspire-5742Z:~/Desktop/tensorflow-mnist-master$ npm install
> tensorflow-mnist@1.0.0 postinstall /home/aniruddh/Desktop/tensorflow-mnist-master
> gulp
[11:29:47] Using gulpfile ~/Desktop/tensorflow-mnist-master/gulpfile.js
[11:29:47] Starting 'build'...
[11:29:48] Finished 'build' after 883 ms
[11:29:48] Starting 'default'...
[11:29:48] Finished 'default' after 27 μs
npm WARN tensorflow-mnist@1.0.0 No repository field.
aniruddh@Aspire-5742Z:~/Desktop/tensorflow-mnist-master$ gunicorn main:app --log-file=-
[2016-12-15 12:34:49 +0530] [6108] [INFO] Starting gunicorn 19.6.0
[2016-12-15 12:34:49 +0530] [6108] [INFO] Listening at: http://127.0.0.1:8000 (6108)
[2016-12-15 12:34:49 +0530] [6108] [INFO] Using worker: sync
[2016-12-15 12:34:49 +0530] [6111] [INFO] Booting worker with pid: 6111
And after this it just gets stuck here.
How do I rectify this?
Thank you for your interest in my repository.
npm install is a command to generate static/js/main.js; the warning messages can be ignored.
If static/js/main.js has been created, just run the gunicorn main:app --log-file=- command and access localhost:8000 in your browser.
