Deploy flask in production with GeventWSGI and Nginx - python

I have a REST API written in Python with Flask and the Flask-RESTful extension. I use the gevent WSGI server:
from gevent.pywsgi import WSGIServer

def runserver():
    # 'api' is the Flask / Flask-RESTful application object
    api.debug = True
    http_server = WSGIServer(('', 5000), api)
    http_server.serve_forever()
All works like a charm on my machine.
Now I want to go to production on a Linux VM. I searched the internet for hours; I didn't choose mod_wsgi because gevent doesn't work properly with it, so I'd prefer to use nginx.
I've seen Flask apps hosted with uWSGI, so my question is: do I need to use uWSGI even though I already use the gevent WSGI server in my Flask application? How does that work?
And if I don't need uWSGI, do I only need to configure the nginx site to pass requests to my Flask app?
I'm new to all this, so I'm a little confused.
Thanks in advance.

You can run uWSGI in gevent mode (http://uwsgi-docs.readthedocs.org/en/latest/Gevent.html) and then route all Flask requests to it via nginx:
server {
    listen 80;
    server_name customersite1.com;
    access_log /var/log/customersite1/access_log;

    location / {
        root /var/www/customersite1;
        uwsgi_pass 127.0.0.1:3031;
        include uwsgi_params;
    }
}
See http://uwsgi-docs.readthedocs.org/en/latest/Nginx.html for more details.
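For reference, the uWSGI side could then be started along these lines. This is only a sketch: it assumes your Flask/Flask-RESTful application object is called api in a module named myapp.py, and that your uWSGI build includes gevent support.

uwsgi --socket 127.0.0.1:3031 --gevent 100 --module myapp:api --master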

Related

nginx how to host both react and django

I have a React frontend imported inside a Django backend. Communication between the two is done through django-rest-framework. On React's side, fetching is done through relative paths, so in my package.json I have added the line:
"proxy": "http://127.0.0.1:8000",
Django hosts the React app locally without problems when I run python3 manage.py runserver.
On the remote side I am trying to use nginx with gunicorn to deploy this app on an AWS Ubuntu instance, and I run into a problem:
First, I run python3 manage.py collectstatic.
Then I point nginx at those static files for index.html.
Success! nginx serves the React static files.
I use gunicorn myapp.wsgi -b 127.0.0.1:8000 to run Django.
Problem! The React files served by nginx do not fetch anything. Fetch does not call the local path but instead calls the public IP of the AWS instance. Also, I cannot simulate GET/POST requests to the Django backend, because I think nginx "covers" the paths generated by Django's gunicorn.
Please tell me how I can connect the nginx-served React frontend to the gunicorn-run Django backend.
My nginx sites-enabled/example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /home/ubuntu/fandigger/frontend/build/;
    server_name public_ip_without_port;

    location / {
        try_files $uri $uri/ =404;
    }
}
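I am guessing that I also need a location block that proxies the API requests to gunicorn, something like the sketch below (the /api/ prefix is only a guess on my part), but I'm not sure this is the right approach:

location /api/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}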

Apache config on a Synology DS for use with mod_wsgi / Django

I've started developing a new site using Django. For realistic testing I wanted to run it on a Synology DS212J NAS.
Following the official Synology guides I installed ipkg and, with it, the mod_wsgi package.
As a next step, following the standard tutorial, I made a virtualenv, installed Django in it, opened a new project and adjusted the settings according to https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-apache-and-mod_wsgi-on-ubuntu-16-04.
I'm able to reach Django's "Hello World" site by using manage.py.
As suggested, I want to replace manage.py with the Apache server on the NAS. So I think I should edit the Apache config files, e.g. to define a virtual host.
However, I can't locate those files; it seems they were moved in DSM 6 (which I use) compared to other guides.
Where do I need to enter the values from the tutorial on the Synology?
As I'm quite new to the topic: do I need to explicitly load the mod_wsgi module for Apache, and if so, where?
Is it a good idea to use the embedded mode of mod_wsgi instead of daemon mode? I'm not sure which Django modules will be used later in development.
Thanks for the support!
Activate the Python 3 package and Web Station.
In Web Station > General Settings > main HTTP server, enable nginx.
In Control Panel > Network > DSM Settings, enable a custom domain: "test" (which will let us access the NAS by entering test.local and simplify the task later).
Enable the SSH connection in Control Panel > Terminal & SNMP.
We use Synology's DDNS service for external access, in our case "test.synology.me".
In Control Panel > Security > Certificate we generate our SSL certificate with Let's Encrypt.
Connect to the NAS over SSH.
Take root rights: sudo -i
Install virtualenv: easy_install virtualenv
Set up the virtual environment: virtualenv -p python3 flasktest
Install Flask and gunicorn:
pip install flask gunicorn
We create our web application, file: init.py
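A minimal sketch of what that file could contain, assuming it defines a Flask instance named app (the gunicorn target app:app used below expects a module named app containing that instance):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # placeholder page, replace with your own views
    return 'Hello from the Synology NAS'

if __name__ == '__main__':
    # local debugging only; in production gunicorn imports the app object
    app.run(host='127.0.0.1', port=5000)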
We launch our web application with gunicorn:
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app
In /etc/nginx/sites-enabled we create a server configuration file; we will use nginx as a proxy. In our case the file will be flasktest.conf.
flasktest.conf file:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    gzip on;
    server_name test.synology.me;

    location / {
        proxy_pass https://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_log /volume1/projects/flasktest/logs/error.log;
    access_log /volume1/projects/flasktest/logs/access.log;
}
Open the port in Control Panel > External Access > Router Configuration > Create > Built-in application: enable the checkbox for Web Station and apply.
We check our server file with: nginx -t
We restart nginx: synoservicecfg --restart nginx
You now have access to your Python web applications from outside over HTTPS: https://test.synology.me
A little more information...
To keep your application accessible permanently (across reboots, crashes, and so on), you can create a script that restarts gunicorn. Also note that we did not modify the main configuration file /etc/nginx/nginx.conf, so if you enter the NAS IP locally you will not see your Python web apps; Web Station takes over and its default index.html page is displayed instead.
Example:
cd /volume1/projects/flasktest
source bin/activate
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app </dev/null 2>&1 &
This method works with other Python frameworks as well.

Trouble with using Nginx with django and uwsgi

I followed the steps in http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html, but when all the steps were done without any error and I visit 127.0.0.1:8000, it responds with a timeout. My nginx log shows:
upstream timed out (110: Connection timed out) while reading response header from upstream
By the way, I can access 127.0.0.1:8001, where uwsgi and Django work well.
I can also access an image at 127.0.0.1:8000/image/1.jpg, but I just cannot access 127.0.0.1:8000.
Here's my nginx.conf:
upstream django {
    server 127.0.0.1:8001;
}

server {
    listen 8000;
    server_name 127.0.0.1;
    charset utf-8;
    client_max_body_size 75M;

    location /media {
        alias /home/zhaolei/virtualdjango/bin/mysite/media;
    }

    location /image {
        alias /home/zhaolei/virtualdjango/bin/mysite/image;
    }

    location / {
        uwsgi_pass django;
        include /home/zhaolei/virtualdjango/bin/mysite/uwsgi_params;
    }
}
I use uwsgi --http 127.0.0.1:8001 --chdir=mysite --module=mysite.wsgi to run uwsgi. I use the uwsgi_params file hosted at https://github.com/nginx/nginx/blob/master/conf/uwsgi_params.
uWSGI has two protocols for communicating with a web server. One of them is the normal HTTP protocol, which can also be used to communicate directly with clients. But there is also the special uwsgi protocol, optimized for communication between an HTTP proxy server and uWSGI.
That protocol is used by nginx when you use the uwsgi_pass directive, and by uWSGI when you start your uWSGI server with the --socket parameter.
If you start uWSGI with the --http parameter, uWSGI will use the HTTP protocol (which is what you're doing), but if nginx is still using uwsgi_pass it expects the uwsgi protocol on that socket, not HTTP.
To fix it you have to either change your uwsgi start command to use --socket instead of --http (that's the recommended way; you won't be able to check that uWSGI is functioning properly by entering 127.0.0.1:8001 directly in your browser, but that's okay: if your command with --http worked properly, there won't be any difference using --socket), or use proxy_pass instead of uwsgi_pass in your nginx config.
This is also described in the link you mentioned.
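For clarity, that first option is just your original command with --socket in place of --http, everything else unchanged:

uwsgi --socket 127.0.0.1:8001 --chdir=mysite --module=mysite.wsgi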

uwsgi + nginx + flask: upstream prematurely closed

I created an endpoint in my Flask app which generates a spreadsheet from a database query (remote DB) and then sends it as a download in the browser. Flask doesn't throw any errors. uWSGI doesn't complain.
But when I check nginx's error.log I see a lot of:
2014/12/10 05:06:24 [error] 14084#0: *239436 upstream prematurely
closed connection while reading response header from upstream, client:
34.34.34.34, server: me.com, request: "GET /download/export.csv HTTP/1.1", upstream: "uwsgi://0.0.0.0:5002", host: "me.com", referrer:
"https://me.com/download/export.csv"
I deploy uwsgi like this:
uwsgi --socket 0.0.0.0:5002 --buffer-size=32768 --module server --callable app
My nginx config:
server {
    listen 80;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}

server {
    listen 443;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
Is this an nginx or uwsgi issue, or both?
As mentioned by @mahdix, the error can be caused by nginx sending a request with the uwsgi protocol while uwsgi is listening on that port for HTTP packets.
When in the Nginx config you have something like:
upstream org_app {
    server 10.0.9.79:9597;
}

location / {
    include uwsgi_params;
    uwsgi_pass org_app;
}
Nginx will use the uwsgi protocol. But if in uwsgi.ini you have something like (or its equivalent in the command line):
http-socket=:9597
uwsgi will speak http, and the error mentioned in the question appears. See native HTTP support.
A possible fix is to have instead:
socket=:9597
In which case Nginx and uwsgi will communicate with each other using the uwsgi protocol over a TCP connection.
Side note: if Nginx and uwsgi are in the same node, a Unix socket will be faster than TCP. See using Unix sockets instead of ports.
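As a rough sketch of that side note, with a placeholder socket path (the module and callable names are taken from the uwsgi command in the question):

[uwsgi]
module = server
callable = app
socket = /tmp/flask-app.sock
chmod-socket = 664
master = true

and the nginx location would then point at the same socket:

uwsgi_pass unix:/tmp/flask-app.sock;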
Change nginx.conf to include
sendfile on;
client_max_body_size 20M;
keepalive_timeout 0;
See the self-answer "uwsgi upstart on amazon linux" for a full example.
In my case, the problem was that nginx was sending requests with the uwsgi protocol while uwsgi was listening on that port for HTTP packets. So I either had to change the way nginx connects to uwsgi, or change uwsgi to listen using the uwsgi protocol.
I had the same sporadic errors in an Elastic Beanstalk single-container Docker WSGI app deployment. On the EC2 instance of the environment, the upstream configuration looks like:
upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}
With this default upstream, a simple load test like:
siege -b -c 16 -t 60S -T 'application/json' 'http://host/foo POST {"foo": "bar"}'
...on the EC2 instance led to an availability of ~70%. The rest were 502 errors caused by upstream prematurely closed connection while reading response header from upstream.
The solution was to either remove the keepalive setting from the upstream configuration, or, which is easier and more reasonable, enable HTTP keep-alive on uWSGI's side as well, with --http-keepalive (available since 1.9).
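A minimal sketch of what that could look like in uwsgi.ini, assuming uWSGI itself serves HTTP on the port nginx proxies to (the module name is a placeholder):

[uwsgi]
module = wsgi:application
http-socket = :8080
http-keepalive = true
master = true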
Replace uwsgi_pass 0.0.0.0:5002; with uwsgi_pass 127.0.0.1:5002; or better use unix sockets.
It seems many causes can lie behind this error message. I know you are using uwsgi_pass, but for those having the problem on long requests when using proxy_pass, setting http-timeout on uWSGI may help (it is not the harakiri setting).
There are many potential causes and solutions for this problem. In my case, the back-end code was taking too long to run. Modifying these variables fixed it for me.
Nginx:
proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout, fastcgi_send_timeout, fastcgi_read_timeout, keepalive_timeout, uwsgi_read_timeout, uwsgi_send_timeout, uwsgi_socket_keepalive.
uWSGI: limit-post.
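Purely as an illustration of where such values go (the location, port and timeout values below are placeholders, not taken from the answer):

location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:5002;
    uwsgi_read_timeout 300s;
    uwsgi_send_timeout 300s;
}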
I fixed this issue by passing socket-timeout = 65 (in the uwsgi.ini file) or --socket-timeout=65 (on the uwsgi command line). You may have to try different values depending on your web traffic; socket-timeout = 65 in uwsgi.ini worked in my case.
I fixed this by reverting to pip3 install uwsgi.
I was trying out the setup on Ubuntu and Amazon Linux side by side. I initially used a virtual environment and did pip3 install uwsgi; both systems worked fine. Later, I continued the setup with the virtual env turned off. On Ubuntu I installed uwsgi using pip3 install uwsgi and on Amazon Linux using yum install uwsgi -y. That was the source of the problem for me.
Ubuntu worked fine, but Amazon Linux did not.
The fix:
yum remove uwsgi, then pip3 install uwsgi, restart, and it works fine.
This issue can also be caused by a mismatch between timeout values.
I had this issue when nginx had a keepalive_timeout of 75s, while the upstream server's value was only a few seconds.
This caused the upstream server to close the connection when its timeout was reached, and nginx logged Connection reset by peer errors.
When you see such abrupt "connection closed" errors, check that the upstream timeout values are higher than nginx's values (see Raphael's answer for a good list to check).

Relative paths in Flask

Given the configuration below for nginx, uWSGI and Flask:
If I move the Flask application from /test/ to production, I must update the nginx configuration, and preferably only that configuration. So a solution would be if Flask's @app.route('/test/') could be relative, i.e. in a non-existing syntax: @app.route('[root]'). I can't find a way to accomplish this. That being said, I presume there is a way, because having to alter all the paths in Flask seems so impractical.
Nginx:
location /test/ {
    uwsgi_pass 127.0.0.1:3031;
    include uwsgi_params;
}
Uwsgi:
uwsgi --socket 127.0.0.1:3031 --wsgi-file myflaskapp.py --callable app --proces$
Flask:
from flask import Flask

app = Flask(__name__)

@app.route('/test/')
def index():
    return "<span style='color:red'>I am app 1</span>"
What I'm trying to accomplish is moving my application to any sub-path of the domain (site.com/apps, site.com/congres/, and so forth) while only updating the nginx configuration.
You're probably thinking of @app.route('/'). The route URL appears to be absolute, but it is actually relative to the root URL of your application.
This is actually covered in Flask's documentation. You only specify the URL to bind your application to in the nginx configuration; Flask should be able to detect this location from the WSGI environment and build its routes accordingly.
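A rough sketch of what that looks like in practice; the --mount/--manage-script-name invocation in the comment is one possible way to do the mounting and is an assumption, not part of the original answer:

# myflaskapp.py -- routes are written relative to the application root
from flask import Flask

app = Flask(__name__)

@app.route('/')   # reachable as /test/ when the app is mounted under /test
def index():
    return "<span style='color:red'>I am app 1</span>"

# One way to mount the app under /test so SCRIPT_NAME is set for Flask
# (placeholder command, adjust to your setup):
#   uwsgi --socket 127.0.0.1:3031 --manage-script-name --mount /test=myflaskapp:app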
