I'm running a Django project on a DigitalOcean VPS using Nginx and Gunicorn. I made sure I'm serving over HTTPS, but for some reason request.is_secure() always returns False and request.scheme returns http, even though the connection really is HTTPS.
What could be the reason for that? Here is my nginx config:
server {
    listen 80;
    server_name MY.SERVER.com;

    location / {
        include proxy_params;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/var/www/proj/myproj.sock;
    }
}
I also made sure to add SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') to my Django settings. Any advice is appreciated.
I ran into the same issue, and it looks like I found out why it doesn't work as expected.
According to the documentation, $scheme equals either http or https.
For a location declared in a server block that listens on port 80, $scheme equals http. Django then receives an HTTP_X_FORWARDED_PROTO header equal to http and treats the request as insecure (i.e. request.is_secure() always returns False). At least, it started to work once I made the following change:
proxy_set_header X-Forwarded-Proto "https";
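Hardcoding the header works, but the underlying cause is that $scheme reflects the listener that handled the request: in a server block listening on plain port 80 it is always http. If nginx itself terminates TLS, $scheme is correctly https inside the 443 server block and can be forwarded as-is. A minimal sketch (the certificate paths are placeholders, not from the original config):

```nginx
server {
    listen 443 ssl;
    server_name MY.SERVER.com;

    # placeholder paths - substitute your real certificate files
    ssl_certificate     /etc/ssl/certs/my.server.com.pem;
    ssl_certificate_key /etc/ssl/private/my.server.com.key;

    location / {
        include proxy_params;
        # $scheme is "https" here because this server block terminates TLS
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/var/www/proj/myproj.sock;
    }
}
```

With this shape, the hardcoded "https" is unnecessary because the variable already carries the right value.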
Related
I work on a Django project with django-rest-framework. On localhost, PATCH requests work perfectly fine. However, on the server, PATCH requests do not work: I get a 400 Bad Request error. I use nginx to configure the web server.
Here is my configuration:
server {
    listen 80;
    server_name x.y.z.com;
    root /var/www/path-to/project;

    location / {
        error_page 404 = 404;
        proxy_pass http://127.0.0.1:5555/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_intercept_errors on;
    }
}
I get this error when I try PATCH requests on the server:
How can I make Django accept PATCH requests? The log does not show anything, as if it never even receives the request. I run the Django development server like this:
python manage.py runserver 5555
Friend, I faced this problem too. I made all the possible changes in nginx, but the problem was in my JS fetch call, which used the method name in lowercase ('patch'); changing it to 'PATCH' made it work normally.
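For reference, this comes from the Fetch standard: only GET, HEAD, POST, PUT, DELETE, and OPTIONS are normalized to uppercase, so a lowercase 'patch' is sent to the server verbatim and can be rejected. A minimal sketch (the endpoint path and payload are made up for illustration):

```javascript
// Hypothetical endpoint; the point is only the uppercase method name.
function patchItem(id, payload) {
  return fetch(`/api/items/${id}/`, {
    method: 'PATCH', // 'patch' would be sent as-is and may yield 400/405
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
}
```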
How do I configure django-rest-swagger to make HTTPS requests?
upd:
The SSL cert is present and the whole app works with it, but swagger makes HTTP requests.
Add this setting to your settings.py:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
For more details, refer to the documentation.
Also, you may need to make sure your server is forwarding the header; on nginx, add this to the location block within your server config:
proxy_set_header X-Forwarded-Protocol $scheme;
Put url='https://your_server_address/', in the get_schema_view function in your urls.
But swagger will then only work over https; if you want it to work on both http and https, you can handle this through environment variables.
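One way to do that, sketched below; the USE_HTTPS environment variable name is an assumption, not a standard Django setting:

```python
# settings.py (sketch) - USE_HTTPS is a made-up environment variable name
import os

USE_HTTPS = os.environ.get('USE_HTTPS', 'false').lower() == 'true'

if USE_HTTPS:
    # only trust the proxy's X-Forwarded-Proto header when deployed behind TLS
    SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```

The url= argument to get_schema_view can then be switched on the same flag, so local http development and the https deployment share one codebase.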
All the above solutions didn't work for me, so I did something HORRIBLE that worked:
Edited drf_yasg/openapi.py Line 260
From:
self.schemes = [url.scheme]
To:
self.schemes = ["https"]
Obviously you should not do this, because the change will be lost the next time someone installs the requirements. But it helped me get the documentation working on my server.
@zaidfazil's answer almost worked for me. I had to add
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
to my django settings but then I had to add
proxy_set_header X-Forwarded-Proto https;
instead of:
proxy_set_header X-Forwarded-Protocol $scheme;
inside nginx's location block that serves the django app.
That's not an issue with your swagger. Just install an SSL cert on the app server that serves your Django app.
I have an active site hosted on Ubuntu that uses nginx; the site is written in Python (CherryPy is the server, Bottle is the framework).
I have a shell script that copies Python files I upload over the existing live ones, which of course causes CherryPy to restart the server so it runs the latest code (as I want). The problem is that between the stop and the start, a default static page is displayed to any unlucky person who tries to view a page at that moment (I hope they aren't submitting a form). I've seen this page many times while updating.
My current setup is two copies of the site running on two ports, reverse proxied with nginx. So I figured that if I update one, wait a few seconds, then update the other, the site would be up 100% of the time, but this doesn't appear to be the case.
Let's say the reverse proxy targets ports 8095 and 8096, both serving the same site from two identical copies on the hard drive. I update the Python files for port 8095, which causes that port to go down while CherryPy restarts. Shouldn't everyone then be hitting 8096? It doesn't seem to work like this. I have an 8-second delay in my copy script, and according to the CherryPy logs the second instance stopped to restart 6 seconds after the first had already finished restarting, yet I still saw the default static offline page that's displayed when the server is down. I'm confused; according to the logs there was always one port up.
Here's part of my nginx.conf:
upstream app_servers {
    server 127.0.0.1:8095;
    server 127.0.0.1:8096;
}

server {
    server_name www.mydomain.com;
    listen 80;
    error_page 502 503 /offline/offline.html;

    location /offline {
        alias /usr/share/nginx/html/mysite/1/views/;
    }

    location / {
        proxy_pass http://app_servers;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
    }
}
Try this, from the manual:
upstream app_servers {
    server 127.0.0.1:8095 max_fails=1 fail_timeout=1;
    server 127.0.0.1:8096 max_fails=1 fail_timeout=1;
}
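With max_fails=1 fail_timeout=1, a backend that fails once is marked unavailable for one second and nginx moves on to the other server. You can also make the retry condition explicit so that a 502 during a restart triggers a retry instead of the offline page; a sketch (the directives are standard nginx, the values are suggestions):

```nginx
upstream app_servers {
    server 127.0.0.1:8095 max_fails=1 fail_timeout=1;
    server 127.0.0.1:8096 max_fails=1 fail_timeout=1;
}

location / {
    proxy_pass http://app_servers;
    # retry the next upstream instead of surfacing the error to the visitor
    proxy_next_upstream error timeout http_502 http_503;
}
```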
I'm trying to load-balance two gunicorn servers with nginx. I am required to have basic auth on the application, so I thought I would put the auth on the nginx server.
However, for some reason my Django app completely fails when I enable basic auth on the nginx server. Everything works perfectly after disabling basic auth in my nginx conf.
Here is my nginx conf.
upstream backend {
    server 10.0.4.3;
    server 10.0.4.4;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_set_header REMOTE_USER $remote_user;
    }

    location /orders {
        auth_basic off;
    }
}
This is the error I'm getting:
Error importing module keystone_auth.backend: "No module named keystone_auth.backend"
I thought it might be some headers that I need to pass through. Is there another way to get basic auth on Django, bearing in mind that it needs to be load balanced? Or is my nginx config missing something?
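As an aside, the auth_basic_user_file used above can be generated without apache2-utils' htpasswd tool; a sketch using openssl (the username and password are placeholders):

```shell
# Generate an htpasswd line with an apr1 (htpasswd-compatible) hash.
# "admin" and "s3cret" are placeholders - use your own credentials.
echo "admin:$(openssl passwd -apr1 's3cret')" > .htpasswd
# then move it into place, e.g.: sudo mv .htpasswd /etc/nginx/.htpasswd
```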
The keystone_auth.backend had mistakenly been included from another settings file. I was still unable to get basic auth working, but eventually solved the issue by writing my own auth backend as described here:
https://docs.djangoproject.com/en/dev/topics/auth/customizing/
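One common shape for such a backend, given that the nginx config above sets proxy_set_header REMOTE_USER $remote_user, is Django's built-in RemoteUserBackend plus a middleware subclass that reads the proxied header. A configuration sketch (the myproj module path is hypothetical, and nginx must always overwrite the REMOTE_USER header so clients cannot spoof it):

```python
# myproj/middleware.py (sketch)
from django.contrib.auth.middleware import RemoteUserMiddleware

class ProxyRemoteUserMiddleware(RemoteUserMiddleware):
    # headers set by the proxy arrive in request.META with an HTTP_ prefix
    header = 'HTTP_REMOTE_USER'

# settings.py (sketch)
MIDDLEWARE = [
    # ...
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'myproj.middleware.ProxyRemoteUserMiddleware',  # hypothetical path
]
AUTHENTICATION_BACKENDS = ['django.contrib.auth.backends.RemoteUserBackend']
```

This delegates the credential check to nginx and lets Django simply trust the authenticated username it receives.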
I've got a Flask app daemonized via supervisor. I want to proxy_pass a subfolder on the localhost to the Flask app. The Flask app runs correctly when run directly, but it gives 404 errors when called through the proxy. Here is the nginx config file:
upstream apiserver {
    server 127.0.0.1:5000;
}

location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_pass http://apiserver;
    proxy_next_upstream error timeout http_502;
    proxy_buffering off;
}
For instance, when I go to http://127.0.0.1:5000/me, I get a valid response from the app. However, when I go to http://127.0.0.1/api/me I get a 404 from the Flask app (not nginx). Also, the Flask SERVER_NAME variable is set to 127.0.0.1:5000, if that's important.
I'd really appreciate any suggestions; I'm pretty stumped! If there's anything else I need to add, let me know!
I suggest not setting SERVER_NAME.
If SERVER_NAME is set, it will 404 any requests that don't match the value.
Since Flask is handling the request, you could just add a little bit of information to the 404 error to help you understand what's passing through to the application and give you some real feedback about what effect your nginx configuration changes cause.
from flask import request

@app.errorhandler(404)
def page_not_found(error):
    return 'This route does not exist: {}'.format(request.url), 404
So when you get a 404 page, it will helpfully tell you exactly what Flask was handling, which should help you to very quickly narrow down your problem.
I ran into the same issue. Flask should really provide more verbose errors here, since the bare 404 isn't very helpful.
In my case, SERVER_NAME was set to my domain name (e.g. example.com).
nginx was forwarding requests without the server name, and as @Zoidberg notes, this caused Flask to return a 404.
The solution was to configure nginx to use the same server name as Flask.
In your nginx configuration file (e.g. sites_available or nginx.conf, depending on where you're defining your server):
server {
    listen 80;
    server_name example.com;  # this should match Flask's SERVER_NAME
    ...etc...
}