Automate Let's Encrypt for Django with nginx and uWSGI - python

I'm worried that this question may be one that could be answered very simply if I just knew what to look for, so I apologise if this is something that's been addressed before.
I've set up a production web server for a Django app using nginx and uWSGI. It has a Let's Encrypt SSL certificate installed, and now I'd like to automate the renewal.
I used the method referenced in this article to add the certificate: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04, adding a .well-known location to the server block:
location ~ /.well-known {
    allow all;
}
I've tried to keep this, but /.well-known now returns 403 Forbidden from nginx once the rest of the server config (provided below) is added.
Can anyone tell me what I've done wrong or how to solve this?
Here's the server config file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.website.co.uk;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    include snippets/ssl-website.co.uk.conf;
    include snippets/ssl-params.conf;
    location /.well-known/ {
        root /home/user/website;
        allow all;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/user/website;
    }
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/website/website.sock;
    }
}
Thanks in advance. I'm still quite new to this and trying to learn.

I had a similar problem. This answer was my solution:
https://stackoverflow.com/a/38949101/4098053
I hope this will help you too!
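For anyone who can't follow the link: a common fix for this kind of 403 is to give the ACME challenge path its own prefix location with an explicit root, so no other location block swallows the request. A minimal sketch, assuming certbot's webroot is /home/user/website as in the config above:
# Serve Let's Encrypt ACME challenge files straight from the webroot.
# ^~ makes this prefix location take priority over regex locations.
location ^~ /.well-known/acme-challenge/ {
    root /home/user/website;
    default_type "text/plain";
    allow all;
}
Once the challenge path is reachable, renewal itself can be automated with a cron entry along these lines (again a sketch, assuming certbot is installed):
0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"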

Related

How to use django-hosts with Nginx

I have created one Django project which has two apps named "api" and "consumer". Now I want to use subdomains for both of these apps, like api.server.com and server.com. I searched online and found django-hosts, so I implemented it on my localhost and it's working fine.
After that I deployed it on an AWS EC2 instance, created the subdomain in GoDaddy, and pointed both the root domain and the subdomain to my instance IP. The root domain works fine, but when I try to go to api.server.com, it shows me the default Welcome to nginx screen. Please help me with this issue.
nginx.conf
server {
    server_name server.com, api.server.com;
    access_log /var/log/nginx/example.log;
    location /static/ {
        alias /home/path/to/static/;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/home/username/project/project.sock;
    }
}
You don't need the comma; a simple space will do:
server_name server.com api.server.com;
You can also use wildcards; see the documentation:
server_name *.server.com;
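After editing server_name, it's worth validating the config and reloading nginx so the new mapping takes effect; a minimal sketch:
# test the configuration, then reload without dropping connections
sudo nginx -t && sudo systemctl reload nginx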
You don't have to use a plugin (like django-hosts) to achieve what you are trying to do. Create two different nginx configurations, one for each hostname (server.com and api.server.com), and forward requests from api.server.com to the /api URL and requests from server.com to /. The following is a basic example.
server.com
server {
    listen 80;
    server_name server.com;
    location / {
        proxy_pass http://127.0.0.1:3000$request_uri;
    }
}
api.server.com
server {
    listen 80;
    server_name api.server.com;
    location / {
        proxy_pass http://127.0.0.1:3000/api$request_uri;
    }
}
I recommend not depending on third-party plugins unnecessarily. Refer to https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ for more details.

nginx - upstream sent too big header while reading response header from upstream

I have an e-commerce project written in Python with the Flask framework. I keep shopping cart information in the session, and when I try to add a product to the session, nginx gives this error:
upstream sent too big header while reading response header from upstream, client: xx.xxx.xx.xxx, server: mysite.com, request: "POST /add_to_cart HTTP/1.1", upstream: "uwsgi://unix:/path/uwsgi.sock:", host: "mysite.com"
This occurs when I have a lot of information in the session.
I tried adding fastcgi and proxy buffer parameters, but it's still not working. Here is my nginx conf file:
server {
    listen 443 ssl;
    server_name mysite.com;
    ssl_certificate /path/nginx.pem;
    ssl_certificate_key /path/nginx.key;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    access_log /path/access.log main;
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    proxy_buffering on;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    location /static/ {
        alias /path/web/static/;
        access_log off;
        index index.html index.htm;
    }
    location / {
        try_files $uri @uwsgi;
        root /path/www/;
        index index.html index.htm;
    }
    location @uwsgi {
        include uwsgi_params;
        uwsgi_pass unix:/path/web/uwsgi.sock;
    }
}
If you're able to reconstruct the exact POST request via curl, or otherwise measure the actual header size, you can specify the proper size for uwsgi_buffer_size (the directive relevant in your case).
Here's my post that has some insight into a similar directive, proxy_buffer_size. There are many *_buffer_size directives; each "proxy"-like NGINX module has one (fastcgi, proxy, uwsgi), but how you approach their tuning (and how they essentially work) is the same.
You can try, without measuring, placing these directly in the server block:
uwsgi_buffer_size 16k;
uwsgi_busy_buffers_size 24k;
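If you do want to measure first, one approach (a sketch; the port and path are placeholders for your own setup) is to serve the app on a direct HTTP port, dump the response headers with curl, and size uwsgi_buffer_size comfortably above the byte count:
# save the response headers only, then measure them in bytes
curl -s -o /dev/null -D headers.txt -X POST http://127.0.0.1:8000/add_to_cart
wc -c headers.txt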

Issue with SSL, nginx, gunicorn, and Django in production

I'm having an issue getting nginx to pass control of routing over to my Django server. By default it checks the '/' path and, if the user isn't logged in, redirects to '/login', then upon login passes back to '/'. The login page works fine until you submit, at which point it throws an 'Internal Server Error'. The server is Ubuntu 16.04, and Python is 3.5 inside a virtualenv. Let me know if I need to provide the gunicorn service config.
My nginx is as follows:
server {
    server_name example.com;
    rewrite ^(.*) https://www.example.com permanent;
}
server {
    listen 80;
    server_name www.example.com;
    rewrite ^(.*) https://www.example.com permanent;
}
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl on;
    ssl_certificate /etc/nginx/12345678.crt;
    ssl_certificate_key /etc/nginx/ssl.key;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ { root /home/ubuntu/app; }
    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/app/app.sock;
    }
}

sendfile() failed (32: Broken pipe) while sending request to upstream nginx 502

I am running a Django, uWSGI, nginx server.
My server works fine for GET and POST requests of smaller size, but when POSTing requests of large size, nginx returns a 502:
nginx error.log is:
2016/03/01 13:52:19 [error] 29837#0: *1482 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 175.110.112.36, server: server-ip, request: "POST /me/amptemp/ HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "servername"
So, in order to find where the real problem is, I ran uWSGI on a different port and checked whether any error occurs with the same request. The request was successful, so the problem is with the nginx or unix socket configuration.
Nginx configuration:
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/uwsgi.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name 52.25.29.179; # substitute your machine's IP address or FQDN
    charset utf-8;
    # max upload size
    client_max_body_size 75M; # adjust to taste
    # Django media
    location /media {
        alias /home/usman/Projects/trequant/trequant-python/trequant/media; # your Django project's media files - amend as required
    }
    location /static {
        alias /home/usman/Projects/trequant/trequant-python/trequant/static; # your Django project's static files - amend as required
    }
    # Finally, send all non-media requests to the Django server.
    location / {
        ######## proxy_pass http://127.0.0.1:8000;
        ######## proxy_set_header Host $host;
        ######## proxy_set_header X-Real-IP $remote_addr;
        ######## proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_pass django;
        uwsgi_read_timeout 600s;
        uwsgi_send_timeout 600s;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
So, any idea what I am doing wrong? Thank you in advance.
Supposedly setting post-buffering = 8192 in your uwsgi.ini file will fix this. I got this from a 2.5-year-old answer here, and it implies this fix is not the root cause. Hope it helps!
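A minimal sketch of where that setting lives, assuming a standard uwsgi.ini layout:
[uwsgi]
; buffer request bodies up to 8 KB before handing them to the app
post-buffering = 8192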
Another fix is to use a TCP socket instead of a unix socket in your conf files:
In uwsgi.ini, use something like socket = 127.0.0.1:8000 in the [uwsgi] section instead of:
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
In your nginx.conf file (by the way, on Ubuntu I'm referring to /etc/nginx/conf.d/nginx.conf, NOT the one simply in /etc/nginx/), use uwsgi_pass 127.0.0.1:8000; instead of the unix: socket path, and keep the include uwsgi_params; line.
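Putting both sides of the TCP variant together, a minimal sketch:
# uwsgi.ini
[uwsgi]
socket = 127.0.0.1:8000

# nginx location block
location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:8000;
}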
I've posted this as a separate answer because either answer may work, and I'm interested to see which answer helps others the most.
In my case this seemed to happen for requests that would have given a 308 redirect. I think my Node backend was sending the response before the POST data was fully received. Updating the client to hit the new endpoint directly (no redirect) may permanently fix my case. Seems promising.
Set a higher body buffer size, e.g. client_body_buffer_size 1M;. This will fix it.
References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
https://www.nginx.com/resources/wiki/start/topics/examples/full/

Django+Nginx+uWSGI = 504 Gateway Time-out

I am running Ubuntu 10.04, Django 1.3, Nginx 0.8.54 and uWSGI 0.9.7.
Both Nginx and uWSGI load without error. However, when you access my site, it sits for a LONG time and then eventually loads a "504 Gateway Time-out" error.
Here is my Nginx Virtual Host conf file:
server {
    listen 80;
    server_name www.mysite.com mysite.com;
    error_log /home/mysite/log/error.log;
    access_log /home/mysite/log/access.log;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /home/mysite/public/passwd;
        include uwsgi_params;
        uwsgi_pass unix:///home/mysite/public/myapp.sock;
    }
    location /media {
        alias /home/mysite/public/myapp/media;
    }
    error_page 401 /coming_soon.html;
    location /coming_soon.html {
        root /home/mysite/public/error_pages/401;
    }
    location /401/images {
        alias /home/mysite/public/error_pages/401/images;
    }
    location /401/style {
        alias /home/mysite/public/error_pages/401/style;
    }
}
My site log shows this:
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request / !!!
My error log shows this:
upstream timed out (110: Connection timed out) while reading response header from upstream
I have two other sites on this server with the same configuration and they load PERFECTLY.
Has anyone else encountered this problem? There are several threads on here that are similar to my issue and I've tried several of those solutions but nothing seems to work.
Thank you in advance for your help!
That error is produced when requests exceed the NGINX uwsgi_read_timeout setting. After NGINX exceeds this limit it closes the socket, and uWSGI then tries to write to the closed socket, producing the error that you see from uWSGI.
Make sure your NGINX timeouts are at least as high as uWSGI timeouts (HARAKIRI_TIMEOUT).
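A minimal sketch of aligning the two (the 300-second values are illustrative only). In the nginx location block that passes to uWSGI:
# give the app up to 300s before nginx gives up and closes the socket
uwsgi_read_timeout 300s;
uwsgi_send_timeout 300s;
And in uwsgi.ini, keep the worker timeout at or below that:
; kill workers that take longer than 300s
harakiri = 300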
unix:///home/mysite/public/myapp.sock;
This syntax is not correct; use it like this:
unix:/home/mysite/public/myapp.sock;
