Authentication on NGINX against Tornado - Python

Well, I just set up NGINX and now it's working.
As my backend web server behind NGINX I have Python Tornado running.
I only use NGINX to allow big (large-sized) uploads, so one of my URLs (for upload) is served by NGINX and the rest of the URLs are served by Tornado.
I use sessions provided by Tornado (running at http://localhost:8080/), and NGINX is running at http://localhost:8888/.
Well, this is my nginx config file:
location /images/upload {
    upload_pass /after_upload;
    .....
    .....
    .....
}
location /after_upload {
    proxy_pass http://localhost:8080/v1/upload/;
}
As you can see, there is nothing about authentication in NGINX.
The URL behind proxy_pass requires a valid session (provided by Tornado).
The scheme of the system is the following:
When a user logs in, the system creates a Tornado session on the server and in the user's browser, so I need to pass authentication through NGINX and then continue that authentication process in the Tornado service.
How do I change NGINX to authenticate against Tornado?
Thanks in advance.

Well, Nginx works as a proxy, so it is not necessary to make changes in Tornado or in your application. For my application I just added rewrites from NGINX URLs to Tornado URLs. This forwards all traffic (auth, etc.) and all HTTP structures as if you were working directly with Tornado.
server {
    listen 8888; ## listen for ipv4
    server_name localhost;
    access_log /var/log/nginx/localhost.access.log;
    client_max_body_size 100000M;

    location / {
        # Real Location URL for Tornado.
        proxy_pass http://localhost:8080/;
    }
}
The key is proxy_pass: every request on port 8888 is passed to port 8080 on localhost.
Everything is passed from Nginx to the Tornado backend.
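Since everything is proxied, session cookies set by Tornado flow back to the browser unchanged. If the backend also needs the original Host header or client address (for logging, redirects, or cookie domains), forwarding headers can be added; a sketch assuming the same ports as above (the header names are conventional, but whether your Tornado handlers read them is an assumption about your setup):

```nginx
location / {
    # Real Location URL for Tornado.
    proxy_pass http://localhost:8080/;
    # Forward the original request context to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```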

Related

Running Python Script Using Nginx and WSGI - Stuck

I'm new to Python and have been put on the task of building out a spreadsheet parser. I've created a Python script that reads an xlsx file and parses the data. I have an Nginx server set up that this will be hosted on. I need this script to be an API endpoint so I can pass the parsed data back as JSON. I have been reading about WSGI for a production server and have tried to follow the route of building that out. I am able to serve a path on the server and have it output the WSGI Python script. The script has the following:
def application(environ, start_response):
    status = '200 OK'
    html = ('<html>\n'
            '<body>\n'
            ' Hooray, mod_wsgi is working\n'
            '</body>\n'
            '</html>\n')
    response_headers = [('Content-Type', 'text/html')]
    start_response(status, response_headers)
    return [html.encode('utf-8')]  # WSGI response bodies must be bytes in Python 3
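For comparison, returning JSON from a raw WSGI handler only takes a different Content-Type and an encoded body; a minimal sketch, where parse_spreadsheet is a hypothetical stand-in for the real xlsx parser class:

```python
import json

def parse_spreadsheet():
    # Hypothetical stand-in for the real xlsx parser
    return {"rows": 3, "status": "parsed"}

def application(environ, start_response):
    data = parse_spreadsheet()
    body = json.dumps(data).encode('utf-8')  # WSGI bodies must be bytes
    start_response('200 OK', [('Content-Type', 'application/json'),
                              ('Content-Length', str(len(body)))])
    return [body]
```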
I'm a little confused as to how to receive a request and send back JSON with my Excel parser class. Thanks, and I hope I'm being clear. I do have a Flask server that works, but I do not know how to keep it constantly running to serve my endpoint:
app = Flask(__name__)

@app.route('/parser/direct_energy', methods=['GET'])
def get_data():
    return jsonify(commissions_data)

if __name__ == '__main__':
    app.run(host='0.0.0.0')
You don't want to use raw WSGI for this.
Use a package such as FastAPI (or Flask) to make everything easier for you.
For instance, using FastAPI, an app with an endpoint to receive a binary (Excel) file and return a JSON response is approximately
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/process")
def process_file(file: UploadFile = File(...)):
    data = file.file.read()  # read the uploaded (Excel) bytes
    response = my_data_processing_function(data)
    return {"response": response}
See:
To get going: https://fastapi.tiangolo.com/tutorial/
To process files: https://fastapi.tiangolo.com/tutorial/request-files/
To deploy your service (behind Nginx): https://fastapi.tiangolo.com/deployment/
I use Python/Flask for development and gunicorn for production.
To get it to accept HTTP requests, I use function decorators. It's the most common way.
@application.route('/epp/api/v1.0/request', methods=['POST'])
def eppJSON():
    if flask.request.json is None:
        return abort(400, "No JSON data was POSTed")
    return jsonRequest(flask.request.json, flask.request.remote_addr)
So here, the URL /epp/api/v1.0/request accepts POSTed JSON and returns JSON.
When you run Flask in dev mode, it listens on http://127.0.0.1:5000
https://github.com/james-stevens/epp-restapi/blob/master/epprest.py
https://github.com/james-stevens/dnsflsk/blob/master/dnsflsk.py
These are both Python/Flask projects of mine. Feel free to copy. They each run multiple instances of the Python code in a single container, load-balanced by nginx, which is a pretty neat combination.
UPDATE
I got things working through Nginx, Flask, and gunicorn. However, my Flask app only works when I go to '/'. If I go to a route such as /parser/de/v1, I get a 404 Not Found.
Here is my setup for NGinx:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html/excel_parser;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name 208.97.141.147;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://127.0.0.1:5000;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
        # try_files checks the filesystem first, so paths that don't exist
        # on disk return 404 before proxy_pass can forward them to Flask.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
My nginx.conf looks slightly different, partly because I am running multiple WSGI instances and getting nginx to load-balance over them:
worker_processes 3;

events {
    worker_connections 1024;
}

user daemon;

http {
    access_log off;
    error_log stderr error;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream dns_servers {
        server unix:/ram/dnsflsk_1.sock;
        server unix:/ram/dnsflsk_2.sock;
        server unix:/ram/dnsflsk_3.sock;
    }

    server {
        listen 800 ssl;
        server_name localhost;
        ssl_certificate certkey.pem;
        ssl_certificate_key certkey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://dns_servers;
        }
    }
}
But with this, all the URLs are passed through to the Python/WSGI backend.

Getting client IP address in nginx/flask application

This question seems to be asked often, but I have not found a good resolution to the problem I am having.
I have a flask application that is behind nginx. The app and nginx communicate via a uwsgi unix socket. The application has a publicly exposed endpoint that is exposed via Route53. It is also exposed via AWS API Gateway. The reason for this dual exposure is that the application is replacing an existing Lambda solution. With the API Gateway, I can support legacy requests until they can transition to the new publicly exposed endpoint. An additional fact about my application: it is running in a Kubernetes pod, behind a load balancer.
I need to get access to the IP address of the client that made the request so I can use geoIP lookups and exclude collection of data for users outside of US (GDPR) among other things. With two paths into the application, I have two different ways to get to the IP address.
Hitting the endpoint from API Gateway
When I come in through the legacy path, I get an X-Forwarded-For header, but I am only seeing IP addresses that are registered to Amazon. I was using the first one in the list, but I only see one or two different IP addresses. This is a test environment, and that may be correct, but I don't think so, because when I hit it from my local browser, I do not find my IP.
Directly hitting the endpoint:
In this case, there is no data in the X-Forwarded-For list, and the only IP address I can find is request.remote_addr. This, unfortunately, only has the IP address of either the pod or maybe the load balancer. I'm not sure which, as it is in the same class, but it matches neither. Regardless, it is definitely not the client IP address. I found documentation in nginx that describes available variables, including $realip_remote_addr. However, when I logged that value, it was the same as remote_addr.
The following is the code that I am using to get the remote_addr:
def remote_addr(self, request):
    x_forwarded_for = request.headers.get("X-Forwarded-For")
    if x_forwarded_for:
        ip_list = x_forwarded_for.split(",")
        return ip_list[0]
    else:
        return request.remote_addr
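A slightly more defensive variant strips whitespace from the comma-separated list; note that the left-most X-Forwarded-For entry is client-supplied and spoofable, so for trust decisions only addresses appended by your own proxies should be used. A sketch, as a standalone function rather than the original method:

```python
def client_ip(x_forwarded_for, remote_addr):
    """Pick the client IP from an X-Forwarded-For header value,
    falling back to the direct connection's address."""
    if x_forwarded_for:
        # Entries may be padded with spaces: "1.2.3.4, 10.0.0.7"
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr
```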
If it is helpful, this is my nginx server config:
server {
    listen 8443 ssl;
    ssl_certificate /etc/certs/cert;
    ssl_certificate_key /etc/certs/key;
    ssl_dhparam /etc/certs/dhparam.pem;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    server_tokens off;

    location = /log {
        limit_except POST {
            deny all;
        }
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location = /ping {
        limit_except GET {
            deny all;
        }
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location = /health-check {
        return 200 '{"response": "Healthy"}';
    }

    location /nginx_status {
        stub_status;
    }
}
I have spent over a day trying to sort this out. I am sure that the solution is trivial and is likely caused by lack of knowledge/experience using nginx.
Kubernetes, by default, does not preserve the client address in the target container (https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/).
Therefore, I solved this by editing my Nginx ingress Service configuration, changing the externalTrafficPolicy property to Local.
...
ports:
  - port: 8765
    targetPort: 9376
externalTrafficPolicy: Local
type: LoadBalancer
...
However, be aware that if you change that you may have a risk of unbalanced traffic among the pods:
Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.
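As a complement: once the real source address reaches the nginx pod (which is what externalTrafficPolicy: Local achieves), nginx's ngx_http_realip_module can rewrite $remote_addr from a proxy-supplied header, so request.remote_addr in Flask becomes the real client. A sketch; the trusted CIDR is an assumption about your load balancer's address range:

```nginx
# Only trust X-Forwarded-For from the load balancer's range (assumed CIDR)
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```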

How to use Flask get client's port?

I'm trying to build a simple torrent tracker with Flask, but I've hit a problem.
If the client is behind a NAPT network, the port included in the request is incorrect. I want to get the client's connecting port with Flask (like the PHP variable $_SERVER['REMOTE_PORT']).
How do I get the client's port with Flask?
You can get it from request.environ
request.environ.get('REMOTE_PORT')
If Flask is behind a reverse proxy,
request.environ.get('REMOTE_PORT')
will not give you what you want: you will get the port used by the reverse proxy.
If using Nginx, add the proxy_set_header line shown below to your config file:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass ...;
        proxy_set_header WHATEVER $remote_port;
    }
}
Then you can get the client port with:
request.headers.get('WHATEVER')
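On the Flask side, a small helper can prefer the proxy-supplied header and fall back to the direct connection; a sketch (the header name must match whatever name you chose in proxy_set_header):

```python
def client_port(headers, environ):
    """Return the client's source port as an int, or None if unknown.

    Prefers the header set by the reverse proxy (the name 'WHATEVER'
    must match the nginx proxy_set_header directive), falling back to
    the WSGI REMOTE_PORT of the direct connection.
    """
    port = headers.get('WHATEVER')
    if port is None:
        port = environ.get('REMOTE_PORT')
    return int(port) if port is not None else None
```

With Flask this would be called as client_port(request.headers, request.environ).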

Setting up Https in Django 1.7+ with uwsgi and Nginx

I'm having trouble setting up my site with https. At the moment, I have my nginx server set to listen for both http and https requests.
However, now I only want to allow https and redirect any http requests to https.
I tried this post without any luck: How to deploy an HTTPS-only site, with Django/nginx?
What is the recommended way of doing this in Django 1.7+?
Below is my nginx.conf file:
# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    server unix:///uwsgi-tutorial/mysite/mysite.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    listen 443 default_server ssl;
    #ssl on;
    ssl_certificate /uwsgi-tutorial/conf/www.example.com.chained.crt;
    ssl_certificate_key /uwsgi-tutorial/conf/www.example.com.key;

    # the domain name it will serve for
    # server_name localhost; # substitute your machine's IP address or FQDN
    server_name example.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /uwsgi-tutorial/mysite/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /uwsgi-tutorial/mysite/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /uwsgi-tutorial/mysite/uwsgi_params; # the uwsgi_params file you installed
    }
}
There is already a library that does this job just fine - sslify:
https://github.com/rdegges/django-sslify
Just proceed with instructions on github page.
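If you would rather handle the redirect at the nginx level instead of in Django, the usual pattern is a separate plain-HTTP server block that issues a permanent redirect; a sketch using the server name from the config above:

```nginx
server {
    listen 80;
    server_name example.com;
    # Send every plain-HTTP request to the HTTPS site
    return 301 https://$host$request_uri;
}
```

If you go this route, the listen 80; line would come out of the HTTPS server block so the two don't conflict.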

How do you append www. to a Django url?

Currently I am using nginx and uWSGI to host my website. I need to append www. to my urls, but I'm not sure what is the best route to take.
Should I be doing this at the nginx level?
Yes, nginx is the most efficient way to prepend (or append) www, though Django provides a setting, PREPEND_WWW, that does the exact same thing when set to True.
E.g. in your nginx config:
server {
    listen 80;
    server_name example.com;
    return 301 http://www.example.com$request_uri;
}
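For completeness, the Django-level equivalent mentioned above is a single setting, handled by django.middleware.common.CommonMiddleware (included in the default project template):

```python
# settings.py
PREPEND_WWW = True  # CommonMiddleware redirects example.com -> www.example.com
```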
