I want to run Sugyan's TensorFlow MNIST project on my laptop. It is an implementation of a handwritten-digit recognition system.
I am using Ubuntu 16.04 LTS, and I have installed everything listed in requirements.txt.
What steps should I go through next?
How do I use npm install?
After I execute the npm install command in the terminal, I get this warning:
aniruddh@Aspire-5742Z:~/Desktop/tensorflow-mnist-master$ npm install
> tensorflow-mnist@1.0.0 postinstall /home/aniruddh/Desktop/tensorflow-mnist-master
> gulp
[11:29:47] Using gulpfile ~/Desktop/tensorflow-mnist-master/gulpfile.js
[11:29:47] Starting 'build'...
[11:29:48] Finished 'build' after 883 ms
[11:29:48] Starting 'default'...
[11:29:48] Finished 'default' after 27 μs
npm WARN tensorflow-mnist@1.0.0 No repository field.
aniruddh@Aspire-5742Z:~/Desktop/tensorflow-mnist-master$ gunicorn main:app --log-file=-
[2016-12-15 12:34:49 +0530] [6108] [INFO] Starting gunicorn 19.6.0
[2016-12-15 12:34:49 +0530] [6108] [INFO] Listening at: http://127.0.0.1:8000 (6108)
[2016-12-15 12:34:49 +0530] [6108] [INFO] Using worker: sync
[2016-12-15 12:34:49 +0530] [6111] [INFO] Booting worker with pid: 6111
And after this it just gets stuck here.
How do I rectify this?
Thank you for your interest in my repository.
npm install is the command that generates static/js/main.js; the warning messages can be ignored.
Once static/js/main.js has been created, just run the gunicorn main:app --log-file=- command and open localhost:8000 in your browser. The server is not stuck: gunicorn stays in the foreground and serves requests until you stop it.
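If you want to double-check that the server is serving rather than hung, you can hit it from a second terminal, for example (a simple sanity check, not a step from the repository's instructions):
curl http://127.0.0.1:8000/
A 200 response with the page's HTML means gunicorn is working and you can open the same URL in your browser.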
I have a Dash application and I have some questions about deploying it to Azure (App Services). I use Git in the Deployment Center.
1) In my requirements.txt I have a package that is causing an issue - pywin32. It gives me the error below during deployment:
ERROR: Could not find a version that satisfies the requirement pywin32==302 (from versions: none)
ERROR: No matching distribution found for pywin32==302
It happens during the installation of dependencies.
2) When I remove pywin32==302 from requirements.txt, I can build and deploy, but the application then shows me an error (I deployed a Flask app the same way before and it worked).
Any ideas how to fix this, please?
Logs here:
2021-12-14T14:09:25.051923007Z Updated PYTHONPATH to ':/tmp/8d9bee966ce1769/antenv/lib/python3.8/site-packages'
2021-12-14T14:09:25.497472285Z [2021-12-14 14:09:25 +0000] [37] [INFO] Starting gunicorn 20.1.0
2021-12-14T14:09:25.500486120Z [2021-12-14 14:09:25 +0000] [37] [INFO] Listening at: http://0.0.0.0:8000 (37)
2021-12-14T14:09:25.504178862Z [2021-12-14 14:09:25 +0000] [37] [INFO] Using worker: sync
2021-12-14T14:09:25.507938905Z [2021-12-14 14:09:25 +0000] [39] [INFO] Booting worker with pid: 39
2021-12-14T14:09:26.617892557Z Application object must be callable.
2021-12-14T14:09:26.619256872Z [2021-12-14 14:09:26 +0000] [39] [INFO] Worker exiting (pid: 39)
2021-12-14T14:09:26.677663238Z [2021-12-14 14:09:26 +0000] [37] [INFO] Shutting down: Master
2021-12-14T14:09:26.677730439Z [2021-12-14 14:09:26 +0000] [37] [INFO] Reason: App failed to load.
/home/LogFiles/2021_12_14_pl0sdlwk00000V_docker.log (https://*****.scm.azurewebsites.net/api/vfs/LogFiles/2021_12_14_pl0sdlwk00000V_docker.log)
2021-12-14T14:04:22.384Z INFO - Stopping site ***** because it failed during startup.
2021-12-14T14:09:08.430Z INFO - Starting container for site
2021-12-14T14:09:08.430Z INFO - docker run -d -p 1972:8000 --name taxdevelopment_0_bbeb99e2 -e WEBSITE_SITE_NAME=***** -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=*****.azurewebsites.net -e WEBSITE_INSTANCE_ID=31a267ed7b71ec86982412cc9dc4ad2f31ca2b8f51b692363aa765c405b03b84 appsvc/python:3.8_20210810.1
2021-12-14T14:09:08.431Z INFO - Logging is not enabled for this container.Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2021-12-14T14:09:10.641Z INFO - Initiating warmup request to container *****_0_bbeb99e2 for site *****
2021-12-14T14:09:33.292Z ERROR - Container *****_0_bbeb99e2 for site ***** has exited, failing site start
2021-12-14T14:09:33.294Z ERROR - Container *****_0_bbeb99e2 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
2021-12-14T14:09:33.300Z INFO - Stopping site ***** because it failed during startup.
/home/LogFiles/webssh/.log (https://*****.scm.azurewebsites.net/api/vfs/LogFiles/webssh/.log)
ERROR - Container *****_0_bbeb99e2 for site ***** has exited, failing site start
ERROR - Container *****_0_bbeb99e2 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
The error message shows that the site is failing to start in the Linux container, possibly because of additional dependencies referenced by your Dash application. You can use the container logs to view detailed information about the error.
Docker logs appear on the Container Settings page in the portal.
You can find the Docker log in the /LogFiles directory. You can access it via the Kudu (Advanced Tools) Bash console or with an FTP client.
You can use our API to download the current logs.
Refer here
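You can also stream the container log from your own machine with the Azure CLI, for example (assuming the CLI is installed; the app and resource group names below are placeholders):
az webapp log tail --name <app-name> --resource-group <resource-group>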
Anyway, you have already removed pywin32==302 from requirements.txt; check whether pypiwin32==302 is also listed there and remove it as well.
Refer here for more information
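If you still want to keep pywin32 in requirements.txt for local Windows development while skipping it on the Linux build, one alternative is a standard pip environment marker (a sketch, not a line taken from your current file):
pywin32==302; sys_platform == "win32"
With the marker, pip simply skips the package on the Linux App Service image instead of failing the build.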
I have a Flask application that I run through gunicorn (on a Dell Latitude 5410 laptop with Ubuntu 20.04), and I configured Docker to run it as suggested in this wonderful guide.
Here is my boot.sh:
#!/bin/sh
source venv/bin/activate
while true; do
    flask db init
    flask db migrate
    flask db upgrade
    if [[ "$?" == "0" ]]; then
        break
    fi
    echo Deploy command failed, retrying in 5 secs...
    sleep 5
done
exec gunicorn -b 0.0.0.0:5000 --access-logfile - --error-logfile - main:app
and the Dockerfile just calls it as the entrypoint:
FROM python:3.8-alpine
RUN adduser -D diagnosticator
RUN apk add --no-cache bash mariadb-dev mariadb-client python3-dev build-base libffi-dev openssl-dev
WORKDIR /home/diagnosticator
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -U pip
RUN venv/bin/pip install wheel
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql
COPY app app
COPY upload upload
COPY variant_dependencies variant_dependencies
COPY main.py config.py boot.sh ./
COPY convert_VCF_REDIS.py asilo_variant_functions.py cloud_bigtable_functions.py mongodb_functions.py redis_functions.py ./
COPY docker_functions.py ./
RUN chmod a+x boot.sh
ENV FLASK_APP main.py
RUN chown -R diagnosticator:diagnosticator ./
USER diagnosticator
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
I run the container in a Docker network created with:
docker network create --driver bridge diagnosticator-net
The problem I noticed is that the Docker container, for some reason, hangs after a bit of inactivity with the gunicorn error:
[CRITICAL] WORKER TIMEOUT
Here are the docker logs:
192.168.32.1 - - [12/Jun/2021:12:28:30 +0000] "GET /patient_page/GHARIgANY21uuITW2196372 HTTP/1.1" 200 19198 "http://127.0.0.1:3000/patient_result" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36"
[2021-06-12 12:29:10 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:16)
[2021-06-12 12:29:10 +0000] [16] [INFO] Worker exiting (pid: 16)
[2021-06-12 12:29:10 +0000] [17] [INFO] Booting worker with pid: 17
[2021-06-12 12:29:10,905] INFO in __init__: Diagnosticator-local startup
[2021-06-12 12:29:41 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:17)
20
[2021-06-12 12:29:41 +0000] [17] [INFO] Worker exiting (pid: 17)
[2021-06-12 12:29:41 +0000] [18] [INFO] Booting worker with pid: 18
[2021-06-12 12:29:42,094] INFO in __init__: Diagnosticator-local startup
[2021-06-12 12:30:12 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:18)
[2021-06-12 12:30:12 +0000] [18] [INFO] Worker exiting (pid: 18)
and the application just hangs, loading forever on any page you try to access.
I've seen some suggestions here and from googling, like this one or this one, that recommend adjusting gunicorn --timeout, but the problem keeps happening no matter what value I put there.
Any help will be deeply appreciated, because I cannot figure out a real solution!
Try running gunicorn with --log-level debug... it should give some trace of the error.
Plus, you can try adding --worker-class to your gunicorn command.
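For instance, a debug run could look like this (a sketch based on the boot.sh command above; gthread and the thread count are illustrative choices, not values taken from the question):
exec gunicorn -b 0.0.0.0:5000 --log-level debug --worker-class gthread --threads 4 --access-logfile - --error-logfile - main:app
The debug log level shows each request as the worker handles it, which helps reveal which request is blocking long enough to trigger the timeout.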
So I'm trying to deploy my Dash app to Azure using Azure Web App. When I use the "GitHub Actions" method of deployment, I get the following (I even made my repository public for this):
Package deployment using ZIP Deploy initiated.
##[error]Failed to deploy web package to App Service.
##[error]Deployment Failed with Error: Error: Failed to deploy web package to App Service.
Unauthorized (CODE: 401)
##[warning]Error: Failed to update deployment history.
Unauthorized (CODE: 401)
App Service Application URL: http://my-site.azurewebsites.net
I've also tried deploying with "Kudu App Build" instead of GitHub Actions, and that "deploys successfully", giving me a status of "Success (Active)", but once I go to my website it fails. When I go to Diagnose and solve problems in the Azure portal, I receive the following error:
2020-06-05T14:32:08.567155259Z
2020-06-05T14:32:08.567159059Z Documentation: http://aka.ms/webapp-linux
2020-06-05T14:32:08.567163159Z Python 3.8.0
2020-06-05T14:32:08.567166959Z Note: Any data outside '/home' is not persisted
2020-06-05T14:32:09.126144120Z Starting OpenBSD Secure Shell server: sshd.
2020-06-05T14:32:09.177989757Z Site's appCommandLine: gunicorn --bind=0.0.0.0 --timeout 600 application:app
2020-06-05T14:32:09.179066899Z Launching oryx with: -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -userStartupCommand 'gunicorn --bind=0.0.0.0 --timeout 600 application:app'
2020-06-05T14:32:09.289253728Z Oryx Version: 0.2.20200114.13, Commit: 204922f30f8e8d41f5241b8c218425ef89106d1d, ReleaseTagName: 20200114.13
2020-06-05T14:32:09.315423056Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2020-06-05T14:32:09.322220623Z Build Operation ID: |mqhAw8K+i4s=.a5925a0c_
2020-06-05T14:32:09.937709104Z Writing output script to '/opt/startup/startup.sh'
2020-06-05T14:32:10.656204831Z Found virtual environment .tar.gz archive.
2020-06-05T14:32:10.656991662Z Removing existing virtual environment directory /antenv...
2020-06-05T14:32:10.667494675Z Extracting to directory /antenv...
2020-06-05T14:32:42.888599159Z Using packages from virtual environment antenv located at /antenv.
2020-06-05T14:32:42.891102058Z Updated PYTHONPATH to ':/antenv/lib/python3.8/site-packages'
2020-06-05T14:32:44.966320399Z [2020-06-05 14:32:44 +0000] [42] [INFO] Starting gunicorn 20.0.4
2020-06-05T14:32:44.973255871Z [2020-06-05 14:32:44 +0000] [42] [INFO] Listening at: http://0.0.0.0:8000 (42)
2020-06-05T14:32:44.974184808Z [2020-06-05 14:32:44 +0000] [42] [INFO] Using worker: sync
2020-06-05T14:32:45.047728498Z [2020-06-05 14:32:45 +0000] [44] [INFO] Booting worker with pid: 44
2020-06-05T14:32:50.315649084Z Application object must be callable.
2020-06-05T14:32:50.321677821Z [2020-06-05 14:32:50 +0000] [44] [INFO] Worker exiting (pid: 44)
2020-06-05T14:32:50.630621560Z [2020-06-05 14:32:50 +0000] [42] [INFO] Shutting down: Master
2020-06-05T14:32:50.630656561Z [2020-06-05 14:32:50 +0000] [42] [INFO] Reason: App failed to load.
That was the application error; here is the container error:
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2020-06-05T14:30:22.831Z INFO - Initiating warmup request to container my-site_0_06a70784 for site my-site
2020-06-05T14:30:38.890Z INFO - Waiting for response to warmup request for container my-site_0_06a70784. Elapsed time = 16.0594968 sec
2020-06-05T14:30:54.492Z INFO - Waiting for response to warmup request for container my-site_0_06a70784. Elapsed time = 31.6616936 sec
2020-06-05T14:31:09.663Z ERROR - Container my-site_0_06a70784 for site my-site has exited, failing site start
2020-06-05T14:31:09.695Z ERROR - Container my-site_0_06a70784 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
2020-06-05T14:31:10.217Z INFO - Stopping site my-site because it failed during startup.
2020-06-05T14:31:11.646Z INFO - Starting container for site
2020-06-05T14:31:11.647Z INFO - docker run -d -p 4040:8000 --name my-site_0_9c00d1f9 -e WEBSITE_SITE_NAME=my-site -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=my-site.azurewebsites.net -e WEBSITE_INSTANCE_ID=b78e390e9ef390f579e4e316c4d4ea6c7f187e4af48b856ebd3562fda3c5ef4f appsvc/python:3.8_20200101.1 gunicorn --bind=0.0.0.0 --timeout 600 application:app
Here is what my application.py looks like (I've tried several variations of this to no avail):
import app as application
app = application.app
if __name__ == "__main__":
    app.run_server(debug=False)
And at the top of my app.py file I have:
app = dash.Dash(__name__)
This is my requirements.txt:
bs4==0.0.1
dash==1.12.0
dash-table==4.7.0
Flask==1.1.2
numpy==1.18.0
pandas==1.0.4
requests==2.12.4
Lastly, this is the startup command I use in the Configuration->General Settings->Startup Command
gunicorn --bind=0.0.0.0 --timeout 600 application:app
The application object you should point gunicorn at when deploying is the dash.Dash instance's .server attribute (in your case, app.server); that is the underlying Flask application. So you have to update the second line of your application.py to this:
app = application.app.server
See more in Dash docs.
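Put together, a corrected application.py would look roughly like this (a sketch assuming app.py still defines the Dash instance as app = dash.Dash(__name__)):
# application.py
import app as application

# gunicorn needs the underlying Flask server, which Dash exposes as .server
app = application.app.server

if __name__ == "__main__":
    # local development only; gunicorn imports `app` directly in production
    application.app.run_server(debug=False)
With this, the existing startup command gunicorn --bind=0.0.0.0 --timeout 600 application:app resolves application:app to a callable WSGI app.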
Following the tutorial here I have the following 2 files:
app.py
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['GET'])
def hello():
    """Return a friendly HTTP greeting."""
    who = request.args.get('who', 'World')
    return f'Hello {who}!\n'

if __name__ == '__main__':
    # Used when running locally only. When deploying to Cloud Run,
    # a webserver process such as Gunicorn will serve the app.
    app.run(host='localhost', port=8080, debug=True)
Dockerfile
# Use an official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.7-slim
# Install production dependencies.
RUN pip install Flask gunicorn
# Copy local code to the container image.
WORKDIR /app
COPY . .
# Service must listen to $PORT environment variable.
# This default value facilitates local development.
ENV PORT 8080
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 app:app
I then build and run them using Cloud Build and Cloud Run:
PROJECT_ID=$(gcloud config get-value project)
DOCKER_IMG="gcr.io/$PROJECT_ID/helloworld-python"
gcloud builds submit --tag $DOCKER_IMG
gcloud run deploy --image $DOCKER_IMG --platform managed
The code appears to run fine, and I am able to access the app at the given URL. However, the logs seem to indicate a critical error, and the workers keep restarting. Here is the log output from Cloud Run after starting up the app and making a few requests in my web browser:
2020-03-05T03:37:39.392Z Cloud Run CreateService helloworld-python ...
2020-03-05T03:38:03.285477Z[2020-03-05 03:38:03 +0000] [1] [INFO] Starting gunicorn 20.0.4
2020-03-05T03:38:03.287294Z[2020-03-05 03:38:03 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
2020-03-05T03:38:03.287362Z[2020-03-05 03:38:03 +0000] [1] [INFO] Using worker: threads
2020-03-05T03:38:03.318392Z[2020-03-05 03:38:03 +0000] [4] [INFO] Booting worker with pid: 4
2020-03-05T03:38:15.057898Z[2020-03-05 03:38:15 +0000] [1] [INFO] Starting gunicorn 20.0.4
2020-03-05T03:38:15.059571Z[2020-03-05 03:38:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
2020-03-05T03:38:15.059609Z[2020-03-05 03:38:15 +0000] [1] [INFO] Using worker: threads
2020-03-05T03:38:15.099443Z[2020-03-05 03:38:15 +0000] [4] [INFO] Booting worker with pid: 4
2020-03-05T03:38:16.320286ZGET200 297 B 2.9 s Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/
2020-03-05T03:38:16.489044ZGET404 508 B 6 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/favicon.ico
2020-03-05T03:38:21.575528ZGET200 288 B 6 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/
2020-03-05T03:38:27.000761ZGET200 285 B 5 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/?who=me
2020-03-05T03:38:27.347258ZGET404 508 B 13 ms Safari 13 https://helloworld-python-xhd7w5igiq-ue.a.run.app/favicon.ico
2020-03-05T03:38:34.802266Z[2020-03-05 03:38:34 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:4)
2020-03-05T03:38:35.302340Z[2020-03-05 03:38:35 +0000] [4] [INFO] Worker exiting (pid: 4)
2020-03-05T03:38:48.803505Z[2020-03-05 03:38:48 +0000] [5] [INFO] Booting worker with pid: 5
2020-03-05T03:39:10.202062Z[2020-03-05 03:39:09 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:5)
2020-03-05T03:39:10.702339Z[2020-03-05 03:39:10 +0000] [5] [INFO] Worker exiting (pid: 5)
2020-03-05T03:39:18.801194Z[2020-03-05 03:39:18 +0000] [6] [INFO] Booting worker with pid: 6
Note the worker timeouts and reboots at the end of the logs. The fact that it's a CRITICAL error makes me think it shouldn't be happening. Is this expected behavior? Is this a side effect of the Cloud Run machinery starting and stopping my service as requests come and go?
Cloud Run has scaled down one of your instances, and the gunicorn arbiter is considering it stalled.
You should add --timeout 0 to your gunicorn invocation to disable the worker timeout entirely; it's unnecessary on Cloud Run.
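With that change, the last line of the Dockerfile from the question would look like this (a sketch; only --timeout 0 is added):
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 app:app
Cloud Run already enforces its own request timeout, so disabling gunicorn's worker timeout does not leave requests unbounded.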
I was facing the error [11229] [CRITICAL] WORKER TIMEOUT (pid:11232) on Heroku.
I changed my Procfile to this:
web: gunicorn --workers=3 app:app --timeout 200 --log-file -
and it fixed my problem by increasing the --timeout.
Here's a working example of a Flask app on Cloud Run. My guess is that the last line of your Dockerfile or the last part of your Python file is causing this behavior.
main.py
# main.py
# gcloud beta run services replace service.yaml
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    msg = "Hello World"
    return msg
Dockerfile (the apt-get part is not needed)
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.7
# Install manually all the missing libraries
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
# Install Python dependencies.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app
then build using:
gcloud builds submit --tag gcr.io/[PROJECT]/[MY_SERVICE]
and deploy:
gcloud beta run deploy [MY_SERVICE] --image gcr.io/[PROJECT]/[MY_SERVICE] --region europe-west1 --platform managed
UPDATE
I've checked the logs you provided again.
Getting this kind of warning/error right after a new deployment is normal: your old instances are no longer handling requests and sit idle until they are completely shut down.
Gunicorn also has a default worker timeout of 30s, which matches the interval between the "Booting worker" message and the moment you see the error.
For those who land here with the same problem but with Django (it probably works the same way) behind gunicorn, supervisor, and nginx: check your configuration in the gunicorn_start file, or wherever you keep your gunicorn parameters. In my case it looks like this; add the timeout on the last line.
NAME="myapp" # Name of the application
DJANGODIR=/webapps/myapp # Django project directory
SOCKFILE=/webapps/myapp/run/gunicorn.sock # we will communicte using this unix socket
USER=root # the user to run as
GROUP=root # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=myapp.settings # which settings file should Django use
DJANGO_WSGI_MODULE=myapp.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER --group=$GROUP \
    --bind=unix:$SOCKFILE \
    --log-level=debug \
    --log-file=- \
    --timeout 120   # <-- the added timeout
Correct me if I am wrong: I can use gunicorn to deploy a Django project. For instance, I can deploy my app, helloapp, this way:
$ cd env
$ . bin/activate
(env) $ cd ..
(env) $ pip install -r requirements.txt
(env) root@localhost:/var/www/html/helloapp# gunicorn helloapp.wsgi:application
[2017-05-18 22:22:38 +0000] [1779] [INFO] Starting gunicorn 19.7.1
[2017-05-18 22:22:38 +0000] [1779] [INFO] Listening at: http://127.0.0.1:8000 (1779)
[2017-05-18 22:22:38 +0000] [1779] [INFO] Using worker: sync
[2017-05-18 22:22:38 +0000] [1783] [INFO] Booting worker with pid: 1783
So now my Django site is running at http://127.0.0.1:8000.
But it stops being available as soon as I close/exit my terminal. How can I keep it listening on port 8000 even after I have closed my terminal?
As with any long-running process, you need to run it as a service under some kind of manager. Since you're on Ubuntu, you probably want to use systemd; full instructions are in the gunicorn deployment docs. Note, you will also need to configure nginx as a reverse proxy in front of gunicorn.
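As a rough sketch (the unit name, user, paths, and worker count below are placeholders based on the question's layout, not a tested configuration), a minimal systemd unit for this setup could look like:
# /etc/systemd/system/helloapp.service -- hypothetical unit file
[Unit]
Description=gunicorn daemon for helloapp
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/html/helloapp
ExecStart=/var/www/html/helloapp/env/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 helloapp.wsgi:application

[Install]
WantedBy=multi-user.target
Enable and start it with sudo systemctl enable --now helloapp; the service then keeps running after you log out and restarts on boot, and nginx can proxy to 127.0.0.1:8000 (or a unix socket) in front of it.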