I tried adding the websockets example project to the datastore sample project. The websockets work, but when a page queries the Datastore or tries to put a new entity I get a 502 response, and the logs show a critical error on the service worker. If I remove the websocket code, the datastore code works as intended. The only difference I can see is that the entrypoints for the two samples differ slightly:
the websocket sample uses
entrypoint: gunicorn -b :$PORT -k flask_sockets.worker main:app
while the datastore sample uses
entrypoint: gunicorn -b :$PORT main:app
websocket sample https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/appengine/flexible/websockets
datastore sample
https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/appengine/flexible/datastore
The problem appears to be that gRPC (the default transport mechanism of the Cloud Datastore client) is not compatible with gevent. Aside from using a different websockets framework, you can work around the issue by activating gRPC's gevent compatibility patch, using the following code:
import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()
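For example, in the sample's main.py the patch would go at the very top, before the Datastore client is created. A minimal sketch (the entity kind and property below are illustrative, not the sample's actual code):

# main.py - patch gRPC for gevent before any client/channel is created.
import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()

from google.cloud import datastore

client = datastore.Client()

def store_visit(timestamp):
    # Illustrative write; the 'Visit' kind and property name are assumptions.
    entity = datastore.Entity(key=client.key('Visit'))
    entity['timestamp'] = timestamp
    client.put(entity)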
As a complement to Andrew's answer, you can extend gunicorn's gevent worker class so that gRPC's patch is applied during the worker's own patching step:
# gevent_grpc_worker.py
from gunicorn.workers.ggevent import GeventWorker
from grpc.experimental import gevent


class GeventGrpcWorker(GeventWorker):
    def patch(self):
        # Apply gevent's monkey patching first, then grpc's compatibility patch.
        super(GeventGrpcWorker, self).patch()
        gevent.init_gevent()
        self.log.info('patched grpc')
# config.py for gunicorn
import multiprocessing

from gevent_grpc_worker import GeventGrpcWorker

# http://docs.gunicorn.org/en/stable/design.html#how-many-workers
workers = multiprocessing.cpu_count() * 2 + 1
worker_connections = 10000
# Use an asynchronous worker as most of the work is waiting for websites to load
worker_class = '.'.join([GeventGrpcWorker.__module__,
                         GeventGrpcWorker.__name__])
timeout = 30
Then start your application with:
gunicorn -c config.py app:app
As you said, there seems to be a problem with the flask_sockets.worker; I have tested it and it does not work with the datastore client.
I tried the Flask-SocketIO framework with the eventlet worker instead, and the datastore queries work fine.
entrypoint: gunicorn -b :$PORT --worker-class eventlet -w 1 main:app
You also need to add the eventlet module to the requirements.txt file: eventlet==0.24.1
The downside is that this breaks compatibility with the websocket code, so you need to rewrite that part; a sketch of what that could look like follows below. Keep in mind that the code samples are only intended to show in a few lines how to use the Google Cloud products; copy-pasting them without adapting the underlying configuration in app.yaml is not a good idea.
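For reference, a minimal sketch of what the rewritten websocket part could look like with Flask-SocketIO (the event name and entity kind are illustrative, not the sample's actual code):

# main.py - served by the eventlet worker from the entrypoint above.
from flask import Flask
from flask_socketio import SocketIO, send
from google.cloud import datastore

app = Flask(__name__)
socketio = SocketIO(app)
client = datastore.Client()

@socketio.on('message')
def handle_message(msg):
    # Store the received websocket message in Datastore, then echo it back.
    entity = datastore.Entity(key=client.key('Message'))
    entity['content'] = msg
    client.put(entity)
    send(msg)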
I have a FastAPI application deployed on DigitalOcean, it has multiple API endpoints and in some of them, I have to run a scraping function as a background job using the RQ package in order not to keep the user waiting for a server response.
I've already managed to create a Redis database on DigitalOcean and successfully connect the application to it, but I'm facing issues with running the RQ worker.
Here's the code, inspired by RQ's official documentation:
import os

import redis
from rq import Worker, Queue, Connection

listen = ['high', 'default', 'low']

# Connect to DigitalOcean's Redis db
REDIS_URL = os.getenv('REDIS_URL')
conn = redis.Redis.from_url(url=REDIS_URL)

# Create an RQ queue using the Redis connection
q = Queue(connection=conn)

with Connection(conn):
    worker = Worker([q], connection=conn)  # This instruction works fine
    worker.work()  # The deployment fails here; the DigitalOcean server crashes at this instruction
The worker/job execution runs just fine locally but fails on DO's server.
What could this be due to? Is there anything I'm missing, or any kind of configuration that needs to be done on DO's side?
Thank you in advance!
I also tried FastAPI's BackgroundTasks class. At first it ran smoothly, but the job stops halfway through with no feedback from the class itself on what is happening in the background. I'm guessing it's due to a timeout that doesn't seem to have a custom configuration in FastAPI (perhaps because its background tasks are meant to be low-cost and fast).
I'm also thinking of trying Celery out, but I'm afraid I would run into the same issues as RQ.
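For completeness, the endpoints enqueue the scraping job roughly like this (a sketch; the scrape_site function and the endpoint path are illustrative):

import os

from fastapi import FastAPI
from redis import Redis
from rq import Queue

from scraper import scrape_site  # hypothetical module containing the job function

app = FastAPI()
q = Queue(connection=Redis.from_url(os.getenv('REDIS_URL')))

@app.post("/scrape")
def start_scrape(url: str):
    job = q.enqueue(scrape_site, url)  # returns immediately; the worker runs the job
    return {"job_id": job.get_id()}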
Create a systemd unit file for the Gunicorn web process with this command:
sudo nano /etc/systemd/system/myproject.service

[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=user
Group=www-data
WorkingDirectory=/home/user/myproject
Environment="PATH=/home/user/myproject/myprojectvenv/bin"
ExecStart=/home/user/myproject/myprojectvenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app

[Install]
WantedBy=multi-user.target

The RQ worker needs its own long-running process. The [program:...] block below is Supervisor syntax, not systemd, so it belongs in a Supervisor config file (e.g. /etc/supervisor/conf.d/rq_worker.conf). Note that RQ's CLI takes the queue names directly and a --url option for the Redis connection (the -A flag belongs to Celery); replace the URL placeholder with your DigitalOcean Redis URL:

[program:rq_worker]
command=/home/user/myproject/myprojectvenv/bin/rq worker high default low --url redis://<your-redis-url>
directory=/home/user/myproject
autostart=true
autorestart=true
stderr_logfile=/var/log/rq_worker.err.log
stdout_logfile=/var/log/rq_worker.out.log
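Then reload the daemons and start both processes (assuming the file names above):

sudo systemctl daemon-reload
sudo systemctl start myproject
sudo systemctl enable myproject
sudo supervisorctl reread
sudo supervisorctl update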
I am working on a FastAPI web application and use the Uvicorn ASGI server. Now I want to configure a stats server in Uvicorn but have not found any reference for it.
For example, uWSGI's Stats Server provides stats like this:
uwsgi --socket :3031 --stats :1717 --module welcome
So my question is: does Uvicorn support a stats server mechanism? Or is there any other way to achieve this?
No, there is an open issue on uvicorn for this. Check this comment for details: https://github.com/encode/uvicorn/issues/610#issuecomment-611987371
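Until uvicorn grows one, a possible workaround is to track simple counters inside the application itself. A minimal sketch using FastAPI middleware (the /stats endpoint and its fields are assumptions, not uvicorn functionality):

import time

from fastapi import FastAPI, Request

app = FastAPI()
stats = {"requests": 0, "started": time.time()}

@app.middleware("http")
async def count_requests(request: Request, call_next):
    # Count every HTTP request that passes through the app.
    stats["requests"] += 1
    return await call_next(request)

@app.get("/stats")
async def get_stats():
    return {**stats, "uptime_seconds": time.time() - stats["started"]}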
I am trying to run an aiohttp based server using Gunicorn.
Here is the command:
gunicorn aiohttpdemo_polls:app --bind 127.0.0.1:8080
It returns:
Failed to find application: 'aiohttpdemo_polls'
But when I run it using python -m like below:
python -m aiohttpdemo_polls
it works fine. The code can be found here; it is a demo app in the aiohttp repo.
I also tried it like below:
gunicorn aiohttpdemo_polls.main:app --bind 127.0.0.1:8080
But it's also not running the server. It returns:
Failed to find application: 'aiohttpdemo_polls.main'
Any idea where to look further for fixing the issue?
aiohttp 3.1 supports a coroutine as an application factory, such as:

from aiohttp import web

async def my_web_app():
    app = web.Application()
    app.router.add_get('/', index)  # index is a request handler defined elsewhere
    return app
The current implementation of aiohttpdemo_polls uses this approach. It can be started with:
gunicorn aiohttpdemo_polls.main:init_app --bind localhost:8080 --worker-class aiohttp.GunicornWebWorker
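For illustration, a self-contained factory matching that command shape (the handler below is a stand-in, not the demo's actual code):

# main.py - minimal stand-in for aiohttpdemo_polls.main
from aiohttp import web

async def index(request):
    return web.Response(text="Hello, aiohttp!")

async def init_app():
    app = web.Application()
    app.router.add_get('/', index)
    return app

It would then start with:
gunicorn main:init_app --bind localhost:8080 --worker-class aiohttp.GunicornWebWorker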
The demo itself does not support gunicorn yet, though. I filed an issue: https://github.com/aio-libs/aiohttp-demos/issues/10
Thanks for the report.
I have a simple flask script that uses requests to make HTTP requests to a third-party web service. The way I run the script in gunicorn is:
gunicorn abc:APP -b 0.0.0.0:8080 -w 4 -k gevent --timeout 30 --preload
However, after I upgraded the code to Python 3.6.2, I can still run the server, but whenever the webserver receives a request it shows:
RecursionError: maximum recursion depth exceeded while calling a Python object
on every worker, though the server seems to still be running. When I change the command to:
gunicorn abc:APP -b 0.0.0.0:8080 -w 4 --timeout 30 --preload
it all works again. So is there any issue with gunicorn's async worker and requests in Python 3.6.2? Is there a way to fix this?
(This question is also asked at https://github.com/benoitc/gunicorn/issues/1559)
This is because ssl is imported before the monkey patch: gunicorn imports the ssl module when loading its configuration, whereas the monkey patch is applied when the worker is initialized, or at the top of the app.py file, which is definitely after the import of ssl. That is why we get the error.
A simple solution is to use a config file for gunicorn.
We can apply the gevent monkey patch at the beginning of the config file and start gunicorn with that config file. This way the monkey patching completes before ssl is imported, which avoids the problem.
A config file named gunicorn_config.py could contain the lines below:

# gunicorn_config.py
# Monkey patch the stdlib before gunicorn imports ssl.
import gevent.monkey
gevent.monkey.patch_all()

workers = 8
and then we could start gunicorn with:
gunicorn --config gunicorn_config.py --worker-class gevent --preload -b 0.0.0.0:5000 app:app
More information could be found here
Please see https://github.com/benoitc/gunicorn/issues/1559. This may be fixed in the next version of gunicorn, but unfortunately you may have to stay with Python 3.5.2 if you don't want to break gunicorn.
I'm getting started with WSGI and, until now, with a little help from some tutorials, I've been running some tests with Flask behind uWSGI, since Flask's built-in server is not a good option for production environments (it doesn't scale well and, by default, answers one request at a time - http://flask.pocoo.org/docs/0.12/deploying/), while uWSGI gives flexibility and more reliability, spawning workers and processes. Am I wrong?
Most of the tutorials I've seen so far point to setups with Nginx in front of WSGI, but is it really necessary? What I'm trying to do is just to find a scalable way to deliver requests to my Flask application, something with more performance and scalability.
So I have this basic setup:
hello.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080)
wsgi.py
from hello import app

if __name__ == "__main__":
    app.run()
Running uWSGI:
uwsgi --socket 0.0.0.0:8080 --plugin python --wsgi-file wsgi.py --callable app --master --processes 4 --threads 2 &
When I perform a curl against the loopback address, I receive an empty reply:
curl http://127.0.0.1:8080
invalid request block size: 21573 (max 4096)...skip
curl: (52) Empty reply from server
Forgive me, but I can't see what I'm missing. Could anyone here more experienced with WSGI point out where this setup fails? Any help would be much appreciated.
Reference documents:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-16-04
http://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html
The socket option should be used when you combine uWSGI with a web server (nginx, for example). Otherwise you should use http, so
uwsgi --http 0.0.0.0:8080 --plugin python --wsgi-file wsgi.py --callable app --master --processes 4 --threads 2
will work.
production environments (it doesn't scale well and, by default, answers
one request at a time - http://flask.pocoo.org/docs/0.12/deploying/),
while uWSGI gives flexibility and more reliability, spawning workers and
processes. Am I wrong?
You are right.
Most of the tutorials I've seen so far point to setups with
Nginx in front of WSGI, but is it really necessary? What I'm trying to
do is just to find a scalable way to deliver requests to my Flask
application, something with more performance and scalability.
Well, nginx is designed to sit in front, and having it is much better than running only the application server (uwsgi). Specialisation, that's the key: let your application server focus on business processing and Python.
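For reference, if you later do put nginx in front, the --socket option pairs with an nginx location along these lines (paths and ports are illustrative):

# /etc/nginx/sites-available/myapp
server {
    listen 80;
    location / {
        include uwsgi_params;        # nginx's bundled uwsgi parameter set
        uwsgi_pass 127.0.0.1:8080;   # matches uwsgi --socket 127.0.0.1:8080
    }
}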