Getting client IP address in nginx/flask application - python

This question seems to be asked often, but I have not found a good resolution to the problem I am having.
I have a Flask application behind nginx; the app and nginx communicate over a uwsgi Unix socket. The application has a publicly exposed endpoint via Route53, and it is also exposed via AWS API Gateway. The reason for this dual exposure is that the application is replacing an existing Lambda solution: with the API Gateway in place, I can support legacy requests until clients transition to the new public endpoint. One additional fact about my setup: the application runs in a Kubernetes pod, behind a load balancer.
I need to get access to the IP address of the client that made the request so I can use geoIP lookups and exclude collection of data for users outside of US (GDPR) among other things. With two paths into the application, I have two different ways to get to the IP address.
Hitting the endpoint from API Gateway
When I come in through the legacy path, I get an X-Forwarded-For header, but I only see IP addresses registered to Amazon. I was using the first one in the list, but I only ever see one or two different IP addresses. This is a test environment, and that may be correct, but I don't think so: when I hit the endpoint from my local browser, my own IP does not appear anywhere in the list.
Directly hitting the endpoint:
In this case, there is no data in the X-Forwarded-For list, and the only IP address I can find is request.remote_addr. Unfortunately, this only holds the IP address of either the pod or perhaps the load balancer; I'm not sure which, as it is in the same address range but matches neither. Regardless, it is definitely not the client IP address. I found nginx documentation describing available variables, including $realip_remote_addr; however, when I logged that value, it was identical to remote_addr.
The following is the code that I am using to get the remote_addr:
def remote_addr(self, request):
    x_forwarded_for = request.headers.get("X-Forwarded-For")
    if x_forwarded_for:
        ip_list = x_forwarded_for.split(",")
        return ip_list[0]
    else:
        return request.remote_addr
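As an illustrative sketch only (not the asker's actual code, and only safe once the proxy layer is known to append real addresses to X-Forwarded-For, since clients can inject fake entries), the helper could skip the private pod and load-balancer hops using the stdlib ipaddress module:

```python
import ipaddress

def client_ip(x_forwarded_for, remote_addr):
    """Return the first globally routable IP found in X-Forwarded-For,
    skipping private/internal hops; fall back to remote_addr.
    The selection policy here is illustrative, not authoritative."""
    for entry in (x_forwarded_for or "").split(","):
        entry = entry.strip()
        try:
            ip = ipaddress.ip_address(entry)
        except ValueError:
            continue  # skip malformed entries
        if ip.is_global:
            return str(ip)
    return remote_addr

print(client_ip("10.0.0.5, 8.8.8.8", "127.0.0.1"))  # 8.8.8.8
```

Note that ipaddress.is_global also rejects loopback and link-local addresses, so a header consisting entirely of internal hops falls through to remote_addr.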
If it is helpful, this is my nginx server config:
server {
    listen 8443 ssl;

    ssl_certificate /etc/certs/cert;
    ssl_certificate_key /etc/certs/key;
    ssl_dhparam /etc/certs/dhparam.pem;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    server_tokens off;

    location = /log {
        limit_except POST {
            deny all;
        }
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location = /ping {
        limit_except GET {
            deny all;
        }
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location = /health-check {
        return 200 '{"response": "Healthy"}';
    }

    location /nginx_status {
        stub_status;
    }
}
I have spent over a day trying to sort this out. I am sure the solution is trivial and likely comes down to my lack of knowledge/experience with nginx.

Kubernetes, by default, does not preserve the client source address in the target container (https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/).
I solved this by editing my Nginx ingress configuration, changing the externalTrafficPolicy field to Local:
...
...
ports:
  - port: 8765
    targetPort: 9376
externalTrafficPolicy: Local
type: LoadBalancer
...
However, be aware that changing this setting carries a risk of unbalanced traffic among the pods:
Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.

Related

Flask SERVER_NAME setting best practices

Since my app has background tasks, I use the Flask context. For the context to work, the Flask setting SERVER_NAME should be set.
When SERVER_NAME is set, incoming requests are checked against this value, and if they don't match, the route isn't found. When placing nginx (or another web server) in front, SERVER_NAME should also include the port, and the reverse proxy should handle the rewriting, hiding the port number from the outside world (which it does).
For session cookies to work in modern browsers, the hostname passed by the proxy must match SERVER_NAME, otherwise the browser refuses to send the cookies. This can be solved by adding the official hostname to /etc/hosts, pointed at 127.0.0.1.
There is one thing that I haven't figured out yet and it is the URL in the background tasks. url_for() is used with the _external option to generate URLs in the mail it sends out. But that URL includes the port, which is different from the 443 port used by my nginx instance.
Removing the port from the SERVER_NAME makes the stuff described in the first paragraph fail.
So what are my best options for handling url_for in the mail? Create a separate config setting? Create my own url_for?
You should use url_for(location, _external=True), or include proxy_params if you use nginx.
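Of the options the question lists, the separate config setting is the simplest to sketch. Everything below is hypothetical (EXTERNAL_BASE_URL and external_url are not Flask APIs): keep SERVER_NAME with the internal port for request matching, and build mail links from a dedicated external base URL so the port never leaks:

```python
from urllib.parse import urljoin

# Hypothetical settings: SERVER_NAME keeps the internal port so Flask's
# request matching works; EXTERNAL_BASE_URL is what mail recipients see.
SERVER_NAME = "app.example.com:8000"
EXTERNAL_BASE_URL = "https://app.example.com/"

def external_url(path):
    """Build an absolute URL for outbound mail without the internal port."""
    return urljoin(EXTERNAL_BASE_URL, path.lstrip("/"))

print(external_url("/reset/abc123"))  # https://app.example.com/reset/abc123
```

Inside a request context the same effect can usually be had with url_for's documented _external=True and _scheme arguments; the helper above is only for background tasks that run outside any request.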

(Python) Add certificate to Bottle server

I am stuck with a problem for some time and can't find a right solution for it.
I have a Python server based on Bottle (Python 3), written with PyCharm. I convert my files with pyinstaller into one exe to start the server on a fixed PC (Win7). The server works fine for what it is needed for, but now I want to add more security to it.
I have a signed certificate (not self-signed) and a key, which I want to add. I tried to start the server with them, but I'm not sure whether I have to do something else, because the certificate is not shown in the page information and the website is still marked as not secure.
My normal server is running with:
from bottle import run, ...
...
if __name__ == "__main__":
    ...
    run(host=IP, port=PORT)
I have tried some frameworks for bottle and I end up with cherrypy as the one, that starts my server in a proper way.
The server is running with:
run(host=IP, port=PORT, server='cherrypy', certfile='./static/MyCert.pem', keyfile='./static/key.pem')
It did not work with the current version of cherrypy, so (after some searching) I downgraded it to ">=3.0.8, <9.0.0".
The server runs, but the website is still not secure, and I don't know whether it simply does not load the certificate or I am missing something else. I tried things like leaving the "keyfile" argument out of the code or adding the key to my certificate, but nothing changed.
Another framework I tried was gevent:
from gevent import monkey; monkey.patch_all()
...
if __name__ == "__main__":
    run(host=IP, port=PORT, reloader=False, server='gevent',
        certfile='./static/MyCert.pem', keyfile='./static/key.pem')
But here I can't reach the website at all. When I try, the terminal asks me for the PEM pass phrase, which I can't enter (or just don't know how to), and I get an error I have never seen before:
(screenshot of the terminal error)
As in my cherrypy example, I tried combinations of leaving parts of the code out or changing the certificate, but it always ends up here.
It would be nice if someone has a solution for my problem or can give me a hint of what I'm missing or just have not thought of yet. I would like to stay with cherrypy or another framework for Bottle, so I don't have to change much of my current code.
Thanks
P.
It sounds to me like you added a passphrase to your private key. Regenerate the key without a passphrase and try again.
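A quick way to confirm this (a heuristic sketch, not specific to Bottle or cherrypy) is to look for the markers that passphrase-protected PEM files carry:

```python
def key_is_encrypted(pem_text):
    """Heuristic: encrypted PKCS#1 keys carry a 'Proc-Type: 4,ENCRYPTED'
    header line, and encrypted PKCS#8 keys use a distinct BEGIN line."""
    return ("Proc-Type: 4,ENCRYPTED" in pem_text
            or "BEGIN ENCRYPTED PRIVATE KEY" in pem_text)

# Illustrative (truncated, non-functional) PEM samples:
plain = "-----BEGIN RSA PRIVATE KEY-----\nMIIE\n-----END RSA PRIVATE KEY-----"
locked = ("-----BEGIN RSA PRIVATE KEY-----\n"
          "Proc-Type: 4,ENCRYPTED\nDEK-Info: AES-256-CBC,0000\n"
          "-----END RSA PRIVATE KEY-----")
print(key_is_encrypted(plain), key_is_encrypted(locked))  # False True
```

If the check comes back True, stripping the passphrase with openssl's rsa subcommand (or regenerating the key) should stop gevent from prompting for the PEM phrase.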
Additionally, a word of advice: I highly recommend running your bottle/cherrypy server behind nginx in reverse proxy mode. This simplifies your configuration by letting nginx handle SSL termination, so your Python web server never needs to know anything about a certificate.
Here's a redacted copy of the nginx config we're using to terminate our (self-signed) SSL cert and reverse proxy our cherrypy site running on localhost on port 9000:
server {
    listen example.com:80;
    server_name test.example.com;
    access_log /var/log/nginx/test.example.com.access.log main;
    return 301 https://test.example.com$request_uri;
}

server {
    listen example.com:443;
    access_log /var/log/nginx/test.example.com.access.log main;
    server_name test.example.com;
    root /usr/local/www/test.example.com/html;

    ssl on;
    ssl_certificate /etc/ssl/test.example.com.crt;
    ssl_certificate_key /etc/ssl/test.example.com.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3 ref: POODLE
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    client_max_body_size 16M;

    # Block access to "hidden" files and directories whose names begin with a
    # period. This includes directories used by version control systems such
    # as Subversion or Git to store control files.
    location ~ (^|/)\. {
        return 403;
    }

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header X-REAL-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
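Behind a config like this, the backend only sees the original client address in the headers nginx injects; a minimal sketch of reading them at the WSGI level (the environ keys follow the proxy_set_header lines above, translated to CGI style: uppercased, hyphens to underscores, HTTP_ prefix):

```python
def client_addr(environ):
    """Prefer the headers set by the nginx reverse proxy; fall back to
    REMOTE_ADDR, which behind the proxy is just 127.0.0.1."""
    real_ip = environ.get("HTTP_X_REAL_IP")
    if real_ip:
        return real_ip
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    if forwarded:
        # Leftmost entry is the original client in nginx's ordering.
        return forwarded.split(",")[0].strip()
    return environ.get("REMOTE_ADDR")

environ = {"REMOTE_ADDR": "127.0.0.1", "HTTP_X_REAL_IP": "198.51.100.4"}
print(client_addr(environ))  # 198.51.100.4
```

Trust these headers only when the app is reachable solely through the proxy; a directly exposed backend would let clients forge them.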

How to get the client's port with Flask?

I'm trying to build a simple torrent tracker with Flask, but I have hit a problem.
If the client is behind a NAPT network, the port included in the request body is incorrect. I want to get the port of the client's connection with Flask (like PHP's $_SERVER['REMOTE_PORT']).
How do I get the client port with Flask?
You can get it from request.environ:
request.environ.get('REMOTE_PORT')
If Flask is behind a reverse proxy, request.environ.get('REMOTE_PORT') will not give you what you want: you will get the port used by the reverse proxy instead. If you are using Nginx, add the proxy_set_header line in this config to pass the real port along:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass ...;
        proxy_set_header WHATEVER $remote_port;
    }
}
Then you can get the client port with:
request.headers.get('WHATEVER')
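The lookup can be sketched without Flask itself, since request.headers and request.environ behave as plain mappings; here the hypothetical name X-Client-Port stands in for whatever header name you chose (WHATEVER above):

```python
def client_port(headers, environ):
    """Return the client's source port: the header set by nginx when
    present, otherwise whatever the WSGI server saw directly (which,
    behind a proxy, is the proxy's ephemeral port, not the client's)."""
    port = headers.get("X-Client-Port") or environ.get("REMOTE_PORT")
    return int(port) if port is not None else None

# Behind nginx configured with: proxy_set_header X-Client-Port $remote_port;
print(client_port({"X-Client-Port": "54321"}, {"REMOTE_PORT": "43210"}))  # 54321
```

As with any proxy-injected header, trust it only when clients cannot reach the backend directly.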

How to convert http to https using nginx for a local server (self-signed certificate)

I am trying to convert http to https (secure) in nginx; for that purpose I created and added a self-signed certificate in my nginx conf file.
server {
    listen 80;
    server_name www.local.com;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443;
    server_name www.local.com;
    ssl on;
    ssl_certificate /etc/ssl/self_signed_certificate.crt;
    ssl_certificate_key /etc/ssl/self_signed_certificate.key;
}
Now, when I open the URL, it redirects from http to https, and Chrome shows me the message "The site's security certificate is not trusted!". When I click "Proceed anyway", I get an "SSL Connection Error". It works perfectly over http. How can I run my local website over https?
I am using the uwsgi server, and the website is in Python/Django. What mistake did I make, or what else do I have to do? Please help. Thanks in advance (sorry for my English).
You are getting the message because you are using a self-signed certificate; to avoid the warning you need an SSL certificate from a trusted provider. You can get a free SSL cert that is trusted by most major browsers at StartSSL. You can see all of the supported browsers here.

Authentication on NGINX against Tornado

Well, I just set up NGINX and now it's working.
As my backend web server behind NGINX I have Python Tornado running.
I only use NGINX to allow big (large-sized) uploads, so one of my URLs (for upload) is served by NGINX and the rest of the URLs are served by Tornado.
I use the sessions provided by Tornado (running at http://localhost:8080/), while NGINX runs at http://localhost:8888/.
Well this is my nginx config file:
location /images/upload {
    upload_pass /after_upload;
    .....
    .....
    .....
}

location /after_upload {
    proxy_pass http://localhost:8080/v1/upload/;
}
As you can see, there isn't anything about authentication in NGINX, yet the URL behind proxy_pass requires a valid session (provided by Tornado).
The scheme of the system is the following: when a user logs in, the system creates a Tornado session on the server and in the user's browser, so I need to pass authentication through NGINX and continue the authentication process in the Tornado service.
How do I change NGINX to authenticate against Tornado?
Thanks in advance
Well, Nginx works as a proxy, so it is not necessary to make changes in Tornado or in your application. For my application I just added rewrites from NGINX URLs to the Tornado URLs, so this covers all traffic (auth, etc.) and all HTTP structures, as if you were working directly with Tornado.
server {
    listen 8888; ## listen for ipv4
    server_name localhost;

    access_log /var/log/nginx/localhost.access.log;
    client_max_body_size 100000M;

    location / {
        # Real location URL for Tornado.
        proxy_pass http://localhost:8080/;
    }
}
The key is proxy_pass: every request on port 8888 is passed to port 8080 on localhost. Everything is passed from Nginx to the Tornado backend.
