I have a setup with nginx in front, reverse proxying requests to gunicorn running on port 8000. For some reason nginx is not forwarding requests to gunicorn. I haven't touched nginx.conf and the conf.d folder is empty. I removed the default configuration in the sites-available directory and created my own with the following content:
server {
    listen 80;
    # listen [::]:80 ipv6only=on;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
And this is the Python app's content:
# server.py
import flask

app = flask.Flask(__name__)

@app.route('/')
def index():
    return 'I am running !'

if __name__ == '__main__':
    app.run(host='127.0.0.1')
For the host in the nginx configuration and in the Python app I've tried 127.0.0.1, 0.0.0.0 and 193.162.144.136 (the actual address), but none of them work.
I'm getting the nginx welcome page on port 80, but I am unable to get the app's output.
There are no errors in the nginx log, and if I visit port 8000 directly it does show the app's content.
I'm running the app in gunicorn with the following command: gunicorn server:app.
Any help in this matter would be appreciated.
I see there are many online tutorials, but I have had no luck getting my application to run correctly.
This is a portion of my Flask app:
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('/etc/ssl/certs/nginx-selfsigned.crt', '/etc/ssl/private/nginx-selfsigned.key')

...........
...........

if __name__ == "__main__":
    # from debugger import initialize_flask_server_debugger_if_needed
    app.run(port=5000, debug=True, ssl_context=context)
I deployed the application to an Ubuntu 20 server.
I installed Nginx on the server and set up its config file as follows (the only config I found that at least does some redirection):
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
    server_name 10.11.238.58;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name 127.0.0.1;
    return 302 https://$server_name$request_uri;
}

server {
    listen 5001;
    server_name 127.0.0.1;
    return 302 https://$server_name$request_uri;
}
I created the certificate and key using the tutorial on the DigitalOcean site.
Now when I type '10.11.238.58', the browser URL changes to 'https://127.0.0.1', which tells me that some redirection is happening. But nginx should be sending the traffic to my Flask app on the Ubuntu server, not to the local PC I am browsing from. The Flask app is running on localhost on the Ubuntu server.
Any help?
Following this tutorial, I tried deploying a Flask app with Nginx and Gunicorn. Starting the app with Gunicorn alone and connecting to it works fine. But after configuring a Gunicorn instance to serve the project and configuring Nginx, connecting to the endpoint gives status 502 in the browser console and a permission error in the Nginx error.log. I am aware of the similar question "(13: Permission denied) while connecting to upstream: [nginx]", but my system is RHEL 7 and I don't understand the commands in the answers, so I am not sure which commands to run. The configurations and error messages are below. Help me if you can identify the problem.
/etc/nginx/conf.d/default.conf
server {
    listen 9000;
    server_name localhost;

    location /fasttext {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/home/ec2-user/projects/vectorbot/vectorbot.sock;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Flask app
import plyvel
from flask import Flask
from flask_restful import Api
from laserembeddings import Laser
from getters.getIntents import *
from getters.getEntities import *

app = Flask(__name__)
api = Api(app)

path_to_bpe_codes = 'data/laser_models/93langs.fcodes'
path_to_bpe_vocab = 'data/laser_models/93langs.fvocab'
path_to_encoder = 'data/laser_models/bilstm.93langs.2018-12-26.pt'
laser = Laser(path_to_bpe_codes, path_to_bpe_vocab, path_to_encoder)

@app.route('/fasttext/lang/si/<keylist>', methods=['GET'])
def get_si(keylist):
    return 'Success'

# Initialize and start the web application
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
wsgi.py
from expose import app

if __name__ == "__main__":
    app.run()
/etc/systemd/system/vectorbot.service
[Unit]
Description=Gunicorn instance to serve vectorbot
After=network.target
[Service]
User=ec2-user
Group=ec2-user
WorkingDirectory=/home/ec2-user/projects/vectorbot
Environment="PATH=/home/ec2-user/projects/vectorbot/venv/bin"
ExecStart=/home/ec2-user/projects/vectorbot/venv/bin/gunicorn --workers 3 --bind unix:vectorbot.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
The service starts successfully without errors.
In /var/log/nginx/error.log:
2020/03/11 12:10:28 [crit] 13499#13499: *291 connect() to unix:/home/ec2-user/projects/vectorbot/vectorbot.sock failed (13: Permission denied) while connecting to upstream, client: #######, server: localhost, request: "GET /fasttext/lang/si/i%20have%20a%20question HTTP/1.1", upstream: "http://unix:/home/ec2-user/projects/vectorbot/vectorbot.sock:/fasttext/lang/si/i%20have%20a%20question", host: "#######"
In browser
Status code 502
This problem has admittedly stumped me for months. I've just procrastinated, fixing other bugs and putting this aside, until now, when it HAS to be fixed.
I am trying to run 2 separate gunicorn apps and start nginx from the same supervisord.conf file. When I start supervisor, the handlecalls app runs successfully, but when I go to the website that commentbox is responsible for loading, I get an internal server error (500).
When I run the handlecalls and commentbox apps separately with the commands from their command fields, the apps run fine. Why does the commentbox program give me a 500 error when I try to run both with supervisord?
my supervisord script:
[program:nginx]
directory = /var/www/vmail
command = service nginx start -g "daemon off;"
autostart = True
[program:commentbox]
directory = /var/www/vmail
command = gunicorn app:app -bind 0.0.0.0:8000
autostart = True
[program:handlecalls]
directory = /var/www/vmail
command = gunicorn handle_calls:app --bind 0.0.0.0:8000
autostart = True
[supervisord]
directory = /var/www/vmail
logfile = /var/www/vmail/supervisorerrs.log
loglevel = trace
This has nothing to do with supervisord. Supervisord is just a way for you to start/stop/restart your server. This has more to do with your server's configuration.
The basics: to serve two gunicorn apps with nginx, you have to run them on two different ports, then configure nginx to proxy_pass requests to their respective ports. The reason is that once a process is listening on a port, that port cannot be used by another process.
So change the configuration in your supervisord script to:
[program:commentbox]
directory = /var/www/vmail
command = gunicorn app:app --bind 0.0.0.0:8000
autostart = True
[program:handlecalls]
directory = /var/www/vmail
command = gunicorn handle_calls:app --bind 0.0.0.0:8001
autostart = True
Then, in your nginx server configuration for handlecalls, point the proxy at the new port:
proxy_pass http://127.0.0.1:8001;
Update: here are the basics of deploying a web application.
As mentioned above, a port can only be listened on by one process at a time.
You can use nginx as an HTTP server listening on port 80 (or 443 for https), and have it pass requests on to other applications listening on other ports (for example, commentbox on port 8000 and handlecalls on port 8001).
You tell nginx how to serve your applications by adding server configuration files in /etc/nginx/sites-available/ (by default; the path differs in some setups). The rules should give nginx a way to know which application a request should be sent to, for example:
To reuse the same HTTP port (80), each application can be assigned a different domain, e.g. commentbox.yourdomain.com for commentbox and handlecalls.yourdomain.com for handlecalls.
A way to serve two different apps on the same domain is to serve them on different ports. For example, yourdomain.com would serve commentbox and yourdomain.com:8080 would serve handlecalls.
A way to serve two different apps on the same domain and the same port is to serve them on two different endpoints. For example, yourdomain.com/commentbox would serve commentbox and yourdomain.com/handlecalls would serve handlecalls (see the sketch right after this list).
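A minimal sketch of that last, path-based option, assuming commentbox is bound to 127.0.0.1:8000 and handlecalls to 127.0.0.1:8001 (the endpoint names are just illustrative):
server {
    listen 80;
    server_name yourdomain.com;

    # Everything under /commentbox/ goes to the commentbox app;
    # the trailing slash on proxy_pass strips the /commentbox/ prefix.
    location /commentbox/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8000/;
    }

    # Everything under /handlecalls/ goes to the handlecalls app.
    location /handlecalls/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8001/;
    }
}
With the trailing slashes, each app still sees its URLs rooted at /, so neither app has to know about its prefix.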
After adding configuration files to /etc/nginx/sites-available/, you must symlink them into /etc/nginx/sites-enabled/ to tell nginx you want them enabled. You can put the files directly in /etc/nginx/sites-enabled/, but I don't recommend it, since that doesn't give you a convenient way to enable/disable an application.
Update: here is how to configure nginx to serve the two gunicorn applications on two different subdomains:
Add two subdomains, commentbox.yourdomain.com and handlecalls.yourdomain.com, and point them both to your server's IP.
Create a configuration file for commentbox at /etc/nginx/sites-available/commentbox with the following content (edit as needed):
server {
    listen 80;
    server_name commentbox.yourdomain.com;
    root /path/to/your/application/static/folder;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8000;
    }
}
Create a configuration file for handlecalls at /etc/nginx/sites-available/handlecalls with the following content (edit as needed):
server {
    listen 80;
    server_name handlecalls.yourdomain.com;
    root /path/to/your/application/static/folder;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8001;
    }
}
Create symlinks to enable those servers:
sudo ln -s /etc/nginx/sites-available/commentbox /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/handlecalls /etc/nginx/sites-enabled/
Restart nginx for the changes to take effect:
sudo service nginx restart
I am deploying an application to a server, but I seem to be misunderstanding some basic concepts here. The problem is that I am running gunicorn on port 8001:
gunicorn myproj.wsgi:application --bind XXX.XXX.XXX.XXX:8001
Nginx, however, is listening on port 8000, as you can see in the file /etc/nginx/sites-available/myproj:
server {
    listen 8000;
    server_name XXX.XXX.XXX.XXX;
    access_log off;

    location /static/ {
        root /opt/myproj;
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3p 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
So, here is what happens: when I access XXX.XXX.XXX.XXX:8001, I get my page, but without any of the static files. I can access the static files at XXX.XXX.XXX.XXX:8000/static/css/mycss.css. However, when I access XXX.XXX.XXX.XXX:8000, I get a 502 Bad Gateway error.
What am I misunderstanding here? How can I access my page with the static files?
Your problem is happening because you are binding gunicorn to your external IP, while nginx is forwarding to the localhost port. The point is that gunicorn should not be accessible from the outside at all; all requests should go through the nginx reverse proxy.
Bind gunicorn to 127.0.0.1:8001.
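For example, keeping everything else in your command the same and changing only the bind address:
gunicorn myproj.wsgi:application --bind 127.0.0.1:8001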
The basic scheme when using an application server like gunicorn is:
[User's web browser] <-> [Web server (Nginx)] <-> [Application server (Gunicorn)]
The web server usually listens on the public IP address on port 80, then forwards the connection to the application server, acting as a reverse proxy. If you run the application server and the web server on the same host, it's common to bind the application server to localhost (IP 127.0.0.1) on the port the proxy points at, i.e. 8001 in your case. So try binding Gunicorn to 127.0.0.1:8001, as specified in your Nginx configuration.
Note: when the two servers run on the same machine, it's often worth connecting them via a Unix socket instead of a network socket, for performance reasons.
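A sketch of the socket variant (the socket path here is just an illustration, and the nginx worker user needs permission to access it):
gunicorn myproj.wsgi:application --bind unix:/run/myproj/myproj.sock
and the matching nginx location:
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://unix:/run/myproj/myproj.sock;
}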
Hi, I am new to the nginx server, and I have uploaded my index.py file to /var/www/pyth/index.py ...
I am a little bit confused, because locally I can simply run
python index.py and access http://127.0.0.1:8080
I was wondering how I can do the same with nginx. I have run python index.py, but I can't access mysite.com:8080.
This is my config in /etc/nginx/sites-available/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    #root /usr/share/nginx/html;
    #index index.php index.py index.html index.htm;
    root /var/www/mysite.com;
    index index.php index.py index.html index.htm;

    # Make site accessible from http://localhost/
    server_name mysite.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
    #location /RequestDenied {
    #    proxy_pass http://127.0.0.1:8080;
    #}

    #error_page 404 /404.html;
    ...
Does anyone have an idea about my case? Any help will be appreciated. Thanks in advance.
You should set up either uwsgi (or something similar), or a proxy_pass in nginx.
The uWSGI option is better because it uses a protocol designed for talking to web servers, though it's a bit harder to set up than just proxying everything through nginx.
proxy_pass
web.py's built-in web server is only meant for development; it shouldn't be used in a production environment because it's really slow and inefficient there, so relying only on proxy_pass wouldn't be a great idea if you are planning to release this.
With proxy_pass, you leave the 127.0.0.1:8080 server running, and then in nginx (on the same machine) set it up like this:
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The proxy_pass directive forwards everything to the web.py server at 127.0.0.1:8080; the proxy_set_header directives pass along information about the connection (the IP of the connected client and the host that was used for the connection on nginx's side).
UWSGI
Using uWSGI, in short, works like this:
1) Install uwsgi using your distro's package manager, pip, or setup.py install.
2) In nginx, set up a server that passes everything to the uWSGI server:
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9000;
    }
}
3) Then, in your web.py application (let's suppose it's called yourappfile.py), instead of app.run(), use:
app = web.application(urls, globals())
application = app.wsgifunc()
You can still have app.run(); just make sure to put it inside the if __name__ == '__main__' block, and make sure application = app.wsgifunc() is outside of it so uWSGI can see it.
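So the end of yourappfile.py might look roughly like this (a sketch; the urls tuple and the index handler are placeholders for whatever your app already defines):
# yourappfile.py
import web

urls = ('/', 'index')

class index:
    def GET(self):
        return 'Hello from web.py'

app = web.application(urls, globals())

# Module-level WSGI callable that uWSGI looks for
application = app.wsgifunc()

if __name__ == '__main__':
    # Only used when running the built-in development server directly
    app.run()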
Then start the uWSGI server so that it matches the nginx config above, speaking the uwsgi protocol on port 9000:
uwsgi --socket 127.0.0.1:9000 --wsgi-file yourappfile.py
Take a look at these manuals; they may help you:
UWSGI Quickstart
Web.py running on the nginx uwsgi
Deployment of Web.py Applications Using uWSGI and Nginx
UWSGI Wiki - Examples