Django REST Swagger HTTPS requests - python

How do I configure django-rest-swagger to make HTTPS requests?
Update:
An SSL cert is present and the whole app works with it, but Swagger still makes HTTP requests.

Add this setting to your settings.py:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
For more details, refer to the documentation.
Also, you may need to make sure your server is forwarding the X-Forwarded-Proto header; on nginx, add this to the location block within your server config:
proxy_set_header X-Forwarded-Protocol $scheme;

Put url='https://your_server_address/' in the get_schema_view function in your urls.
But Swagger then only works over https; if you want it to work on both http and https, you can handle this through environment variables, as in the sketch below.
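A minimal sketch of that approach, assuming drf-yasg's get_schema_view (the package edited in a later answer) and a hypothetical SWAGGER_BASE_URL environment variable:
import os

from drf_yasg import openapi
from drf_yasg.views import get_schema_view
from rest_framework import permissions

# Hypothetical env var: set SWAGGER_BASE_URL=https://your_server_address/ in production;
# leave it unset locally so the schema falls back to the request's own scheme and host.
schema_view = get_schema_view(
    openapi.Info(title="My API", default_version="v1"),
    public=True,
    permission_classes=(permissions.AllowAny,),
    url=os.environ.get("SWAGGER_BASE_URL"),
)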

All the above solutions didn't work for me, so I did something HORRIBLE that worked:
Edited drf_yasg/openapi.py Line 260
From:
self.schemes = [url.scheme]
To:
self.schemes = ["https"]
Obviously you should not do this because the next time someone installs requirements this change will be lost. But it helped me to get the documentation working on my server.
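A less fragile variant of the same idea, assuming drf-yasg, is to override the schema generator instead of editing the installed package (a sketch; pass it via generator_class in get_schema_view):
from drf_yasg.generators import OpenAPISchemaGenerator

class HttpsSchemaGenerator(OpenAPISchemaGenerator):
    def get_schema(self, request=None, public=False):
        schema = super().get_schema(request, public)
        schema.schemes = ["https"]  # force https instead of the detected scheme
        return schema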

@zaidfazil's answer almost worked for me. I had to add
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
to my django settings but then I had to add
proxy_set_header X-Forwarded-Proto https;
instead of:
proxy_set_header X-Forwarded-Protocol $scheme;
inside nginx's location block that serves the django app.

That's not an issue with your swagger. Just install an SSL cert on your django-serving app server.

Related

Django - request.is_secure always returns False

I'm running a Django project on a DigitalOcean VPS using Nginx and Gunicorn. I made sure that I'm using HTTPS, but for some reason request.is_secure() always returns False, and request.scheme returns HTTP, even though I'm sure the site is served over HTTPS.
What could be the reason for that? Here is my nginx config:
server {
    listen 80;
    server_name MY.SERVER.com;

    location / {
        include proxy_params;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/var/www/proj/myproj.sock;
    }
}
And I also made sure to add SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') to my Django settings. Any advice is appreciated.
I ran into the same issue, and it looks like I found out why it doesn't work as expected.
According to the documentation, $scheme equals either http or https.
In the case of a location declared in a server that listens on port 80, $scheme equals http. Then, AFAIU, Django receives an HTTP_X_FORWARDED_PROTO header equal to http and treats the request as unsecured (i.e. request.is_secure() always returns False). At least, it started to work well when I made the following change:
proxy_set_header X-Forwarded-Proto "https";
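To check what actually reaches Django, a small debug view like this (a sketch; hook it into your urls.py however you like) shows the forwarded header next to is_secure():
from django.http import JsonResponse

def debug_scheme(request):
    # With SECURE_PROXY_SSL_HEADER set, is_secure() is True only when
    # the X-Forwarded-Proto header arrives as "https".
    return JsonResponse({
        "x_forwarded_proto": request.META.get("HTTP_X_FORWARDED_PROTO"),
        "is_secure": request.is_secure(),
        "scheme": request.scheme,
    })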

(Python) Add certificate to Bottle server

I have been stuck on a problem for some time and can't find the right solution for it.
I have a Python server based on Bottle (Python 3) written with PyCharm. I'm converting my files with pyinstaller into one exe to start the server on a fixed PC (Win7). The server works fine for the things it is needed for, but now I want to add more security to it.
I have a signed certificate (not self-signed) and a key, which I want to add. I tried to start the server with them, but I'm not sure if I have to do something else with them, because the certificate is not shown in the page information and the website is still shown as not secure.
My normal server is running with:
from bottle import run, ...
...
if __name__ == "__main__":
    ...
    run(host=IP, port=PORT)
I have tried some server backends for Bottle and ended up with CherryPy as the one that starts my server properly.
The server is running with:
run(host=IP, port=PORT, server='cherrypy', certfile='./static/MyCert.pem', keyfile='./static/key.pem')
It is not working with the current version of CherryPy, so I downgraded it (after some searching) to ">=3.0.8, <9.0.0".
The server is running, but the website is still not secure, and I don't know if it just does not load the certificate or if I am missing something else. I tried things like leaving the keyfile out of the code or adding the key to my certificate, but it does not change anything.
Another framework I tried was gevent:
from gevent import monkey; monkey.patch_all()
...
if __name__ == "__main__":
    run(host=IP, port=PORT, reloader=False, server='gevent', certfile='./static/MyCert.pem', keyfile='./static/key.pem')
But here I can't get to the website. When I try, the terminal asks me for the PEM passphrase, but I can't enter it (or just don't know how), and I get an error I have never seen before:
[terminal error screenshot]
As in my CherryPy example, I tried various combinations of leaving parts of the code out or changing the certificate, but it always ends up here.
It would be nice if someone had a solution for my problem or could give me a hint about what I'm missing or just haven't thought of yet. I would like to stay with CherryPy or another server backend for Bottle, so I don't have to change much of my current code.
Thanks
P.
It sounds to me like you added a passphrase to your certificate. Regenerate your cert without a passphrase and try again.
Additionally, a word of advice. I highly recommend running your bottle/cherrypy server behind nginx in reverse proxy mode. This simplifies your configuration by letting nginx handle the termination of your SSL session, and then your python web server never needs to know anything about a certificate.
Here's a redacted copy of the nginx config we're using to terminate our (self-signed) SSL cert and reverse proxy our cherrypy site running on localhost on port 9000:
server {
    listen example.com:80;
    server_name test.example.com;
    access_log /var/log/nginx/test.example.com.access.log main;
    return 301 https://test.example.com$request_uri;
}

server {
    listen example.com:443;
    access_log /var/log/nginx/test.example.com.access.log main;
    server_name test.example.com;
    root /usr/local/www/test.example.com/html;

    ssl on;
    ssl_certificate /etc/ssl/test.example.com.crt;
    ssl_certificate_key /etc/ssl/test.example.com.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3 ref: POODLE
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    client_max_body_size 16M;

    # Block access to "hidden" files and directories whose names begin with a
    # period. This includes directories used by version control systems such
    # as Subversion or Git to store control files.
    location ~ (^|/)\. {
        return 403;
    }

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header X-REAL-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
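With nginx terminating TLS like this, the Bottle process itself no longer needs the certificate; a minimal sketch of the Python side (the route is just an example, the port matches the proxy_pass above):
from bottle import Bottle, run

app = Bottle()

@app.route("/")
def index():
    return "Hello, served over HTTPS terminated by nginx"

if __name__ == "__main__":
    # Plain HTTP, bound to localhost only; nginx proxies 443 -> 127.0.0.1:9000
    run(app, host="127.0.0.1", port=9000, server="cherrypy")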

flask-security can't work with gunicorn with multiple workers?

I'm writing a website with Flask. I use Flask-Security to do authentication, and I use nginx + gunicorn to deploy it.
The nginx configuration is as follows:
server {
    listen 80;
    server_name project.example.com;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
    }
}
And I use gunicorn -w worker_number -k gevent run:app -p app.pid -b 127.0.0.1:5000 to start gunicorn.
If the worker_number is 1, everything is ok.
If the worker_number is greater than 1, say 3, I can't log in with Flask-Security.
The server output says the login POST request returns 200, but then I'm redirected to the login page again.
After some searching, I can't find the direct reason for this. I guess it might be caused by Flask's SERVER_NAME config or a misuse of Flask-SQLAlchemy.
Has anyone met this situation before? Please give me some advice.
I met a similar problem with flask_login: when the worker_number was greater than 1, I could not log in.
My app.secret_key was set to os.urandom(24), so every worker had a different secret key.
Setting app.secret_key to a fixed string solved my problem.
Just use a fixed secret key.
Generate one with the following command:
$ openssl rand -base64 <desired_length>
If you don't want to hardcode the key inside the source code (which you shouldn't, for security reasons), set an environment variable and read it:
import os
# -- snip --
app.config["SECRET_KEY"] = os.environ.get("FLASK_APP_SECRET_KEY")
Use something like Flask-Session and use Redis as a session store (see the sketch below). I'm not sure how Flask-Security works, but I assume it relies on Flask sessions, in which case a shared session store would solve the problem of a user session switching between application servers.
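A minimal sketch of that setup, assuming the Flask-Session extension and a local Redis instance (names and the Redis URL are illustrative):
import redis
from flask import Flask
from flask_session import Session

app = Flask(__name__)
app.config["SECRET_KEY"] = "use-a-fixed-key-here"
app.config["SESSION_TYPE"] = "redis"
app.config["SESSION_REDIS"] = redis.from_url("redis://127.0.0.1:6379")
Session(app)  # all gunicorn workers now share the same session store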

Flask app gives ubiquitous 404 when proxied through nginx

I've got a Flask app daemonized via supervisor. I want to proxy_pass a subfolder on localhost to the Flask app. The Flask app runs correctly when run directly; however, it gives 404 errors when called through the proxy. Here is the config file for nginx:
upstream apiserver {
    server 127.0.0.1:5000;
}

location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_pass http://apiserver;
    proxy_next_upstream error timeout http_502;
    proxy_buffering off;
}
For instance, when I go to http://127.0.0.1:5000/me, I get a valid response from the app. However when I go to http://127.0.0.1/api/me I get a 404 from the flask app (not nginx). Also, the flask SERVER_NAME variable is set to 127.0.0.1:5000, if that's important.
I'd really appreciate any suggestions; I'm pretty stumped! If there's anything else I need to add, let me know!
I suggest not setting SERVER_NAME.
If SERVER_NAME is set, it will 404 any requests that don't match the value.
Since Flask is handling the request, you could just add a little bit of information to the 404 error to help you understand what's passing through to the application and give you some real feedback about what effect your nginx configuration changes cause.
from flask import request

@app.errorhandler(404)
def page_not_found(error):
    return 'This route does not exist {}'.format(request.url), 404
So when you get a 404 page, it will helpfully tell you exactly what Flask was handling, which should help you to very quickly narrow down your problem.
I ran into the same issue. Flask should really provide more verbose errors here since the naked 404 isn't very helpful.
In my case, SERVER_NAME was set to my domain name (e.g. example.com).
nginx was forwarding requests without the server name, and as @Zoidberg notes, this caused Flask to trigger a 404.
The solution was to configure nginx to use the same server name as Flask.
In your nginx configuration file (e.g. sites_available or nginx.conf, depending on where you're defining your server):
server {
    listen 80;
    server_name example.com; # this should match Flask SERVER_NAME
    ...etc...
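For reference, the Flask side might look like this (a sketch; adjust to however your project loads its config):
from flask import Flask

app = Flask(__name__)
# Must match the server_name directive nginx forwards in the Host header
app.config["SERVER_NAME"] = "example.com"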

Get IP address of visitors using Flask for Python

I'm making a website where users can log on and download files, using the Flask micro-framework (based on Werkzeug) which uses Python (2.6 in my case).
I need to get the IP address of users when they log on (for logging purposes).
Does anyone know how to do this? Surely there is a way to do it with Python?
See the documentation on how to access the Request object, then read its remote_addr attribute.
Code example
from flask import request
from flask import jsonify

@app.route("/get_my_ip", methods=["GET"])
def get_my_ip():
    return jsonify({'ip': request.remote_addr}), 200
For more information see the Werkzeug documentation.
Proxies can make this a little tricky; make sure to check out ProxyFix (Flask docs) if you are using one. Take a look at request.environ in your particular environment. With nginx I will sometimes do something like this:
from flask import request
request.environ.get('HTTP_X_REAL_IP', request.remote_addr)
When proxies, such as nginx, forward addresses, they typically include the original IP somewhere in the request headers.
Update: see the flask-security implementation. Again, review the documentation about ProxyFix before implementing. Your solution may vary based on your particular environment.
Actually, what you will find is that simply using the following will get you the server's address:
request.remote_addr
If you want the client's IP address, then use the following:
request.environ['REMOTE_ADDR']
The below code always gives the public IP of the client (and not a private IP behind a proxy).
from flask import request

if request.environ.get('HTTP_X_FORWARDED_FOR') is None:
    print(request.environ['REMOTE_ADDR'])
else:
    print(request.environ['HTTP_X_FORWARDED_FOR'])  # if behind a proxy
I have Nginx with the below Nginx config:
server {
    listen 80;
    server_name xxxxxx;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://x.x.x.x:8000;
    }
}
@tirtha-r's solution worked for me:
#!flask/bin/python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/', methods=['GET'])
def get_tasks():
    if request.environ.get('HTTP_X_FORWARDED_FOR') is None:
        return jsonify({'ip': request.environ['REMOTE_ADDR']}), 200
    else:
        return jsonify({'ip': request.environ['HTTP_X_FORWARDED_FOR']}), 200

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8000)
My Request and Response:
curl -X GET http://test.api
{
"ip": "Client Ip......"
}
The user's IP address can be retrieved using the following snippet:
from flask import request
print(request.remote_addr)
httpbin.org uses this method:
return jsonify(origin=request.headers.get('X-Forwarded-For', request.remote_addr))
If you use Nginx behind another balancer, for instance an AWS Application Load Balancer, HTTP_X_FORWARDED_FOR returns a list of addresses. It can be fixed like this:
if 'X-Forwarded-For' in request.headers:
    proxy_data = request.headers['X-Forwarded-For']
    ip_list = proxy_data.split(',')
    user_ip = ip_list[0]  # first address in list is the user's IP
else:
    user_ip = request.remote_addr  # for local development
Here is the simplest solution, and how to use it in production.
from flask import Flask, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)

# Set environment from any X-Forwarded-For headers if proxy is configured properly
app.wsgi_app = ProxyFix(app.wsgi_app, x_host=1)

@app.before_request
def before_process():
    ip_address = request.remote_addr
Add include proxy_params to /etc/nginx/sites-available/$project.
location / {
    proxy_pass http://unix:$project_dir/gunicorn.sock;
    include proxy_params;
}
include proxy_params forwards the following headers which are parsed by ProxyFix.
$ sudo cat /etc/nginx/proxy_params
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
If you are using a Gunicorn and Nginx environment, then the following code template works for you.
addr_ip4 = request.remote_addr
This should do the job.
It provides the client IP address (remote host).
Note that this code is running on the server side.
from mod_python import apache
req.get_remote_host(apache.REMOTE_NOLOOKUP)
I could not get any of the above to work with Google Cloud App Engine. This worked, however:
ip = request.headers['X-Appengine-User-Ip']
The proposed request.remote_addr only returned the localhost IP every time.
