How do you append www. to a Django url? - python

Currently I am using nginx and uWSGI to host my website. I need to append www. to my urls, but I'm not sure what is the best route to take.
Should I be doing this at the nginx level?

Yes, handling this at the nginx level is the most efficient approach, though Django also provides a setting, PREPEND_WWW, that does the same thing when set to True.
E.g. in your nginx config:
server {
    listen 80;
    server_name example.com;
    return 301 http://www.example.com$request_uri;
}
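If you would rather do it at the Django level instead, a minimal sketch of the PREPEND_WWW approach (assuming the default CommonMiddleware is enabled, since that is what implements the redirect) looks like this:

# settings.py
# CommonMiddleware issues a permanent redirect from example.com
# to www.example.com when this is True.
PREPEND_WWW = True

MIDDLEWARE = [
    "django.middleware.common.CommonMiddleware",
    # ... the rest of your middleware ...
]

Doing the redirect in nginx avoids a round trip into Python, which is why the nginx rule is usually preferred.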

Related

Django - request.is_secure always returns False

I'm running a Django project on a DigitalOcean VPS using Nginx and Gunicorn. I made sure that I'm using HTTPS, but for some reason request.is_secure() always returns False and request.scheme returns http, even though I made sure HTTPS is configured on the VPS.
What could be the reason for that? Here is my nginx config:
server {
    listen 80;
    server_name MY.SERVER.com;

    location / {
        include proxy_params;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/var/www/proj/myproj.sock;
    }
}
I also made sure to add SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') to my Django settings. Any advice is appreciated.
I ran into the same issue, and it looks like I found out why it doesn't work as expected.
According to the documentation, $scheme equals either http or https.
For a location declared in a server block that listens on port 80, $scheme is http. Django therefore receives an HTTP_X_FORWARDED_PROTO header equal to http and treats the request as unsecured (i.e. request.is_secure() always returns False). At least, it started to work correctly when I made the following change:
proxy_set_header X-Forwarded-Proto "https";
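For completeness, the Django side of this setup is the setting already quoted in the question; the two cookie settings below are optional additions that commonly accompany TLS termination at the proxy:

# settings.py
# Trust the X-Forwarded-Proto header set by nginx so that
# request.is_secure() returns True behind the proxy.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

# Optional hardening once TLS is terminated at nginx:
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True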

How to use django-hosts with Nginx

I have created a Django project which has two apps named "api" and "consumer". Now I want to use subdomains for both of these apps, like api.server.com and server.com. I searched online, found django-hosts, implemented it on my localhost, and it works fine.
After that I deployed it on an AWS EC2 instance, created a subdomain in GoDaddy, and pointed both the root domain and the subdomain to my instance IP. The root domain works fine, but when I try to go to api.server.com it shows me the default "Welcome to nginx" screen. Please help me with this issue.
nginx.conf
server {
    server_name server.com, api.server.com;
    access_log /var/log/nginx/example.log;

    location /static/ {
        alias /home/path/to/static/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/username/project/project.sock;
    }
}
You don't need the comma; a simple space will do.
server_name server.com api.server.com;
You can also use wildcards; see the documentation.
server_name *.server.com;
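For reference, the django-hosts side that the question describes typically looks something like the sketch below; the project and app module names are illustrative, and the bare domain falls back to DEFAULT_HOST:

# settings.py
INSTALLED_APPS += ['django_hosts']

MIDDLEWARE = (
    ['django_hosts.middleware.HostsRequestMiddleware']
    + MIDDLEWARE
    + ['django_hosts.middleware.HostsResponseMiddleware']
)

ROOT_HOSTCONF = 'project.hosts'
DEFAULT_HOST = 'www'

# project/hosts.py
from django_hosts import host, patterns

host_patterns = patterns(
    '',
    host(r'www', 'project.urls', name='www'),  # www.server.com (and server.com via DEFAULT_HOST)
    host(r'api', 'api.urls', name='api'),      # api.server.com
)

Note that the nginx fix above is still needed either way: nginx has to accept the api.server.com host name before Django ever sees the request.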
You don't have to use a plugin (like django-hosts) to achieve what you are trying to do. Create a separate nginx configuration for each subdomain (server.com and api.server.com), and forward requests from api.server.com to the /api URL and requests from server.com to /. Below is a basic example.
server.com
server {
    listen 80;
    server_name server.com;

    location / {
        proxy_pass http://127.0.0.1:3000$request_uri;
    }
}
api.server.com
server {
    listen 80;
    server_name api.server.com;

    location / {
        proxy_pass http://127.0.0.1:3000/api$request_uri;
    }
}
I recommend not depending on third-party plugins unnecessarily. Refer to https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ for more details.

Flask SERVER_NAME setting best practices

Since my app has background tasks, I use the Flask context. For the context to work, the Flask setting SERVER_NAME should be set.
When SERVER_NAME is set, incoming requests are checked against this value; if they don't match, the route isn't found. When placing nginx (or another web server) in front, SERVER_NAME should also include the port, and the reverse proxy should handle the rewriting, hiding the port number from the outside world (which it does).
For session cookies to work in modern browsers, the host name passed by the proxy should be the same as SERVER_NAME, otherwise the browser refuses to send the cookies. This can be solved by adding the official hostname to /etc/hosts and pointing it to 127.0.0.1.
There is one thing I haven't figured out yet: the URLs in the background tasks. url_for() is used with the _external option to generate URLs for the mail it sends out, but those URLs include the port, which is different from port 443 used by my nginx instance.
Removing the port from SERVER_NAME makes the setup described in the first paragraph fail.
So what are my best options for handling url_for in the mail? Create a separate config setting? Create my own url_for?
You should use url_for(location, _external=True), or include proxy_params if you use nginx.
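A minimal sketch of what that can look like for a background task, assuming the port can be dropped from SERVER_NAME because nginx forwards the real Host header (which proxy_params does); the hostname and route are illustrative:

from flask import Flask, url_for

app = Flask(__name__)
app.config['SERVER_NAME'] = 'example.com'        # public hostname, no port
app.config['PREFERRED_URL_SCHEME'] = 'https'     # so external URLs use https

@app.route('/reset/<token>')
def reset(token):
    return 'ok'

def build_reset_link(token):
    # Outside a request, url_for(..., _external=True) builds the URL from
    # SERVER_NAME and PREFERRED_URL_SCHEME, so no port appears in the mail.
    with app.app_context():
        return url_for('reset', token=token, _external=True)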

Nginx + Gunicorn - error pages for static resources

I am running a Python Flask application with Gunicorn and Nginx as a reverse proxy. Pages are served by Gunicorn and Nginx is serving files from my static folder directly.
It's working correctly except when I get a 404 on a static resource.
I have set up custom error handlers in Flask to show 'pretty' error pages on HTTP error codes. This also works fine when I request a non-existent page.
However, when a static resource doesn't exist, nginx serves its own default 404 page instead of the Flask app's 404 page (which makes sense, since it's bypassing Gunicorn). Is there a way to tell nginx to serve the Flask error handler page via Gunicorn if it encounters an error serving a static resource?
Here is my current nginx conf file for this server:
server {
    listen 80;
    server_name example.com;
    access_log /home/aaron/dev/apwd-flask/logs/access.log;
    error_log /home/aaron/dev/apwd-flask/logs/error.log;

    location / {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }

    location /static {
        alias /home/aaron/dev/apwd-flask/app/static/;
    }
}
I'm thinking (hoping) I can use an error_page directive to give control back to Gunicorn and tell it to serve the appropriate custom error handler, but haven't been able to figure out if that's possible or how to do it from the documentation.
Answering my own question, since I was able to locate an answer after a lot of searching; I'm posting it here for the benefit of anyone else who may have the same issue. I expect this would work for any backend, not just Gunicorn.
https://www.nginx.com/resources/admin-guide/serving-static-content/
In the section entitled 'Trying Several Options', the final example shows the solution to this problem: using the try_files directive in the static location block, I can tell nginx to pass the request to a named location if it fails to find the requested file.
Here is my new nginx conf file which is working as expected now for non-existent static file requests:
server {
    listen 80;
    server_name example.com;
    access_log /home/aaron/dev/apwd-flask/logs/access.log;
    error_log /home/aaron/dev/apwd-flask/logs/error.log;

    location @apwd_flask {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }

    location / {
        try_files $uri @apwd_flask;
    }

    location /static {
        alias /home/aaron/dev/apwd-flask/app/static/;
        try_files $uri @apwd_flask;
    }
}
Now my @apwd_flask named location is the Gunicorn backend, and when a static file isn't found by nginx directly, the request is sent to the backend, which serves its own 404 response.
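For completeness, the Flask side of this is just an ordinary error handler, as described in the question; a minimal sketch, with an illustrative template name:

from flask import Flask, render_template

app = Flask(__name__)

@app.errorhandler(404)
def page_not_found(error):
    # Served for any 404 that reaches the app, including static misses
    # that nginx hands back via the @apwd_flask named location.
    return render_template('404.html'), 404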
You need to change the owner of the files in the directory below:
/home/aaron/dev/apwd-flask/app/static/
To give the nginx user read access to the files in the static directory, change the owner (or the owning group) to www-data and make all files in this directory readable.
You can do this by running the command below:
chown -R www-data:www-data /home/aaron/dev/apwd-flask/app/static/

Authentication on NGINX against Tornado

Well, I just set up NGINX and now it's working.
As my backend web server behind NGINX I have Python Tornado running.
I only use NGINX to allow big (large-sized) uploads, so one of my URLs (for upload) is served by NGINX and the rest of the URLs are served by Tornado.
I use the sessions provided by Tornado (running at http://localhost:8080/), and NGINX is running at http://localhost:8888/.
Well this is my nginx config file:
location /images/upload {
    upload_pass /after_upload;
    .....
    .....
    .....
}

location /after_upload {
    proxy_pass http://localhost:8080/v1/upload/;
}
As you can see, there is nothing about authentication in NGINX.
The URL behind proxy_pass requires a valid session (provided by Tornado).
The scheme of the system is the following: when users log in, the system creates a Tornado session on the server and in the user's browser, so I need to pass authentication through NGINX and continue this authentication process in the Tornado service.
How do I change NGINX to authenticate against Tornado?
Thanks in advance
Well, Nginx works as a proxy, so it is not necessary to make changes in Tornado or in your application. For my application I just added rewrites from NGINX URLs to Tornado URLs, so this covers all traffic (auth, etc.) and all HTTP structures, as if you were working directly against Tornado.
server {
    listen 8888;  ## listen for ipv4
    server_name localhost;
    access_log /var/log/nginx/localhost.access.log;
    client_max_body_size 100000M;

    location / {
        # Real location URL for Tornado.
        proxy_pass http://localhost:8080/;
    }
}
The key is proxy_pass: every request on port 8888 is passed to port 8080 on localhost. Everything is passed from Nginx to the Tornado backend.
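To make the "nothing changes in Tornado" point concrete, here is a minimal sketch of a Tornado handler using secure cookies; the session cookie set at login passes through the nginx proxy unchanged, so tornado.web.authenticated works the same whether the request arrived on port 8888 or 8080 (handler names and the cookie secret are illustrative):

import tornado.ioloop
import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        # The session cookie set by Tornado at login is forwarded
        # untouched by the nginx proxy_pass above.
        return self.get_secure_cookie('user')

class UploadHandler(BaseHandler):
    @tornado.web.authenticated
    def post(self):
        self.write('upload accepted')

application = tornado.web.Application(
    [(r'/v1/upload/', UploadHandler)],
    cookie_secret='CHANGE_ME',   # illustrative
    login_url='/login',
)

if __name__ == '__main__':
    application.listen(8080)
    tornado.ioloop.IOLoop.current().start()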
