Django Fails with Basic Auth with Nginx + Gunicorn - python

I'm trying to load balance two gunicorn servers with nginx. I am required to have basic auth on the application, so I thought I would put the auth on the nginx server.
However, for some reason my Django app completely fails when I enable basic auth on the nginx server. Everything works perfectly after disabling basic auth in my nginx conf.
Here is my nginx conf.
upstream backend {
    server 10.0.4.3;
    server 10.0.4.4;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_set_header REMOTE_USER $remote_user;
    }

    location /orders {
        auth_basic off;
    }
}
This is the error I'm getting:
Error importing module keystone_auth.backend: "No module named keystone_auth.backend"
I thought it might be some headers that I need to pass through. Is there another way to get basic auth on Django, bearing in mind that it needs to be load balanced? Or is my nginx config missing something?

The keystone_auth.backend had mistakenly been included from another settings file. I was still unable to get basic auth working, but eventually solved the issue by writing my own auth backend as described here:
https://docs.djangoproject.com/en/dev/topics/auth/customizing/
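The answer doesn't show the backend it ended up with. Given the proxy_set_header REMOTE_USER $remote_user; line in the nginx config above, one common way to wire this up is Django's built-in remote-user support with the header name overridden, since the value arrives as an HTTP header (HTTP_REMOTE_USER) once it crosses the proxy. A minimal sketch, not the author's actual code; the module path myproject/middleware.py and the class name are assumptions:

from django.contrib.auth.middleware import RemoteUserMiddleware

class ProxyRemoteUserMiddleware(RemoteUserMiddleware):
    # nginx forwards the username it authenticated via auth_basic as an
    # HTTP header, so Django sees it as HTTP_REMOTE_USER, not REMOTE_USER.
    header = 'HTTP_REMOTE_USER'

# settings.py (excerpt)
MIDDLEWARE = [
    # ...
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'myproject.middleware.ProxyRemoteUserMiddleware',
]
AUTHENTICATION_BACKENDS = ['django.contrib.auth.backends.RemoteUserBackend']

Trusting a header like this is only safe if every request passes through nginx, which both performs the basic auth and sets REMOTE_USER itself; the gunicorn backends must not be reachable directly.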

Related

Enable CORS in nginx with a reverse proxy to a strawberry-graphql Python app with ASGI using daphne

I have a website that has the following setup:
the client is an Angular 14 project
the server is a Python app with Strawberry for GraphQL
nginx is used as the web server
an ASGI Python script runs the app using daphne
I'm getting CORS-related errors when I try to access GraphQL from the Angular app:
Access to XMLHttpRequest at 'https://URL/graphql' from origin 'https://URL' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.
In the nginx server I have listen 443 ssl http2; set with certificates from Let's Encrypt.
I created an upstream setup for the Python project:
upstream myproj-server {
    server 127.0.0.1:8001;
}
and created a named @backend location:
location @backend {
    proxy_pass http://myproj-server; # <--- THIS DOES NOT HAVE A TRAILING '/'
    #proxy_set_header 'Access-Control-Allow-Origin' "*";
    #proxy_set_header 'Access-Control-Allow-Credentials' 'true';
    #proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
    #proxy_set_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With';
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
}
and set up the graphql location:
location /graphql {
    # add_header 'Access-Control-Allow-Origin' "*" always;
    # add_header 'Access-Control-Allow-Credentials' 'true' always;
    # add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
    # add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
    try_files $uri $uri/ @backend;
}
I commented out the CORS lines with #, but I did try enabling them, adding CORS either at the /graphql location or at the proxy configured in the @backend location; neither configuration changed anything.
Next, I have a server.py with the ASGI application using the Strawberry plugin for GraphQL:
from strawberry.asgi import GraphQL
from app import schema
app = GraphQL(schema, graphiql=False)
and I start it with: daphne -b 0.0.0.0 -p 8001 server:app
Here I tried to modify server.py to use the Starlette CORS middleware:
from strawberry.asgi import GraphQL
from starlette.middleware.cors import CORSMiddleware
from starlette.middleware import Middleware
from starlette.applications import Starlette
from app import schema
middleware = [
    Middleware(CORSMiddleware,
               allow_origins=['*'],
               allow_methods=["*"],
               allow_credentials=True,
               allow_headers=["*"])
]

graphql_app = GraphQL(schema, graphiql=True)
app = Starlette(middleware=middleware)
app.add_route("/graphql", graphql_app)
app.add_websocket_route("/graphql", graphql_app)
and also here the results are the same.
I'm sure the reverse proxy works properly, because if I set graphiql to True and browse to mydomain/graphql it does open the GraphQL playground.
I've tried everything I can think of and I'm pretty lost, so any ideas or information regarding this issue would be greatly appreciated.
I checked the network tab of my browser and noticed that the OPTIONS request (the CORS preflight request) fails with a 301, which is a redirect. Then I noticed that the GraphQL URL is mydomain.com/graphql and not www.mydomain.com/graphql, and I do redirect to www in my nginx configuration.
I disabled the CORS headers in nginx; I prefer to control this outside of the nginx configuration.
The server.py configuration with Starlette did the trick. Of course, now I'll make it more secure.
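For the "make it more secure" part, a minimal sketch of how the Starlette setup above could be tightened by allowing only the site's own origin instead of '*'. The origin https://www.mydomain.com is an assumption; use whatever origin the Angular app is actually served from:

from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.cors import CORSMiddleware
from strawberry.asgi import GraphQL

from app import schema

middleware = [
    Middleware(
        CORSMiddleware,
        # Assumed canonical origin of the Angular app.
        allow_origins=["https://www.mydomain.com"],
        allow_methods=["GET", "POST", "OPTIONS"],
        allow_credentials=True,
        allow_headers=["*"],
    )
]

graphql_app = GraphQL(schema, graphiql=False)
app = Starlette(middleware=middleware)
app.add_route("/graphql", graphql_app)
app.add_websocket_route("/graphql", graphql_app)

Restricting allow_origins is also what makes allow_credentials=True meaningful, since browsers reject credentialed requests combined with a wildcard origin.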

Django and nginx do not accept PATCH requests, resulting in a 400 bad request error

I work on a Django project with django-rest-framework. On localhost, PATCH requests work perfectly fine. However, on the server, PATCH requests do not work; I get a 400 Bad Request error. I use nginx to configure the web server.
Here is my configuration:
server {
    listen 80;
    server_name x.y.z.com;
    root /var/www/path-to/project;

    location / {
        error_page 404 = 404;
        proxy_pass http://127.0.0.1:5555/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_intercept_errors on;
    }
}
I get a 400 Bad Request error when I try PATCH requests on the server.
How can I make Django accept PATCH requests? The log does not show anything, as if it never even receives the request. I run the Django server like this:
python manage.py runserver 5555
Friend, I faced this problem and made all the possible settings in nginx, but the problem was in my JS fetch, which used the patch method in lowercase ('patch'); changing it to 'PATCH' made it work normally.

Checking proxy set headers forwarded by nginx reverse proxy (Django app)

I'm using nginx as a reverse proxy with gunicorn for my Django app, and am new to web server configuration. My app has a Postgres backend, and the machine hosting it runs Ubuntu 14.04 LTS.
I have reason to suspect that my nginx configuration is not forwarding the proxy_set_header values to the Django app correctly. Is there a way I can see (e.g. print) the host, http_user_agent, remote_addr etc. being forwarded, on the Linux command line, to test my gut feeling?
Secondly, how do I check whether my Django app received the correct forwarded IP? E.g. can I somehow print it?
/etc/nginx/sites-available/myproject:
server {
    listen 80;
    server_name example.cloudapp.net;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/mhb11/folder/myproject;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/home/mhb11/folder/myproject/myproject.sock;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/mhb11/folder/myproject/templates/;
    }
}
All you have to do is print out request.META at the Django project level to see what all of those values are being set to. This happens automatically if you get an error while DEBUG is set to True (just scroll down and you'll see a big table with all the request.META values populated).
Or you can print it yourself from your views.py, or, if that doesn't work, from any middleware you have. You can even write custom middleware for this. Let me know if you need further clarification; I can give you basic code snippets too.
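The answer doesn't include a snippet, but a minimal sketch of the kind of middleware it mentions might look like this (new-style Django middleware; the class name and the set of keys logged are illustrative):

import logging

logger = logging.getLogger(__name__)

class LogForwardedHeadersMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Log what actually arrived from nginx so it can be compared
        # against the proxy_set_header lines in the site config.
        for key in ('HTTP_HOST', 'HTTP_USER_AGENT', 'HTTP_X_REAL_IP',
                    'HTTP_X_FORWARDED_FOR', 'HTTP_X_FORWARDED_PROTO',
                    'REMOTE_ADDR'):
            logger.info('%s = %s', key, request.META.get(key))
        return self.get_response(request)

Add it to MIDDLEWARE in settings.py, make a request through nginx, and watch the gunicorn/Django log.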
This question was posted a long time ago, but since I landed here when I needed help, below is some code I ended up using to achieve the same goal, for the benefit of other people.
def getClientIP(request):
    x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
    if x_forwarded_for:
        ip = x_forwarded_for.split(',')[-1].strip()
    else:
        ip = request.META.get('REMOTE_ADDR')
    return ip
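For example, from a view (the view name is just for illustration). Taking the last X-Forwarded-For entry matches the $proxy_add_x_forwarded_for setting above, since the last entry is the one appended by your own nginx, while earlier entries are supplied by the client and could be spoofed:

from django.http import HttpResponse

def whoami(request):
    # Returns the client's IP address as seen through the proxy headers.
    return HttpResponse(getClientIP(request))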

Prevent site being down during updates using nginx and Python

I have an active site that's hosted on Ubuntu, uses nginx, and the site is written in Python (CherryPy is the server, Bottle is the framework).
I have a shell script that copies Python files I upload over the existing live-site ones, which of course results in CherryPy restarting the server so it's running the latest code (how I want it). The problem is that between the time it stops and starts, a default static page is displayed to any unlucky person who tries to view a page on the site at that moment (hope they aren't submitting a form). I've seen this page a bunch of times while updating.
My current setup is two copies of the site running on two ports, reverse proxied with nginx. So I figured that if I update one, wait a few seconds, then update the other, the site will be up 100% of the time, but this doesn't appear to be the case.
Let's say I have a reverse proxy on ports 8095 and 8096; both show the same site, but from two identical copies of it on the hard drive. I update the Python files for port 8095, which causes that port to go down while CherryPy restarts it. Shouldn't everyone then be hitting 8096? It doesn't seem to work like this. I have an 8 second delay in my file copy script, and according to the CherryPy logs the second one stopped to restart 6 seconds after the first had already finished restarting, yet I still saw the default static offline page that's displayed when the server is down. I'm confused. According to the logs there was always one port up.
Here's part of my nginx.conf:
upstream app_servers {
    server 127.0.0.1:8095;
    server 127.0.0.1:8096;
}

server {
    server_name www.mydomain.com;
    listen 80;

    error_page 502 503 /offline/offline.html;
    location /offline {
        alias /usr/share/nginx/html/mysite/1/views/;
    }

    location / {
        proxy_pass http://app_servers;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
    }
}
Try this, from the manual:
upstream app_servers {
    server 127.0.0.1:8095 max_fails=1 fail_timeout=1;
    server 127.0.0.1:8096 max_fails=1 fail_timeout=1;
}

Flask app gives ubiquitous 404 when proxied through nginx

I've got a Flask app daemonized via supervisor. I want to proxy_pass a subfolder on localhost to the Flask app. The Flask app runs correctly when run directly; however, it gives 404 errors when called through the proxy. Here is the config file for nginx:
upstream apiserver {
    server 127.0.0.1:5000;
}

location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_pass http://apiserver;
    proxy_next_upstream error timeout http_502;
    proxy_buffering off;
}
For instance, when I go to http://127.0.0.1:5000/me, I get a valid response from the app. However, when I go to http://127.0.0.1/api/me I get a 404 from the Flask app (not nginx). Also, the Flask SERVER_NAME variable is set to 127.0.0.1:5000, if that's important.
I'd really appreciate any suggestions; I'm pretty stumped! If there's anything else I need to add, let me know!
I suggest not setting SERVER_NAME.
If SERVER_NAME is set, it will 404 any requests that don't match the value.
Since Flask is handling the request, you could just add a little bit of information to the 404 error to help you understand what's passing through to the application and give you some real feedback about what effect your nginx configuration changes cause.
from flask import request

@app.errorhandler(404)
def page_not_found(error):
    return 'This route does not exist {}'.format(request.url), 404
So when you get a 404 page, it will helpfully tell you exactly what Flask was handling, which should help you to very quickly narrow down your problem.
I ran into the same issue. Flask should really provide more verbose errors here since the naked 404 isn't very helpful.
In my case, SERVER_NAME was set to my domain name (e.g. example.com).
nginx was forwarding requests without the server name, and as @Zoidberg notes, this caused Flask to trigger a 404.
The solution was to configure nginx to use the same server name as Flask.
In your nginx configuration file (e.g. in sites-available or nginx.conf, depending on where you're defining your server):
server {
    listen 80;
    server_name example.com; # this should match Flask SERVER_NAME
    ...etc...
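For completeness, a sketch of the matching Flask side; the only important part is that the value equals the server_name nginx uses:

from flask import Flask

app = Flask(__name__)
# Must match the Host/server_name that nginx forwards, or Flask will 404 the request.
app.config['SERVER_NAME'] = 'example.com'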
