I have an e-commerce project written in Python with the Flask framework, and I keep shopping cart information in the session. When I try to add a product to the cart, nginx gives this error:
upstream sent too big header while reading response header from upstream, client: xx.xxx.xx.xxx, server: mysite.com, request: "POST /add_to_cart HTTP/1.1", upstream: "uwsgi://unix:/path/uwsgi.sock:", host: "mysite.com"
This occurs when there is a lot of information in the session. I tried adding fastcgi and proxy_buffer parameters, but it is still not working. Here is my nginx conf file:
server {
    listen 443 ssl;
    server_name mysite.com;

    ssl_certificate /path/nginx.pem;
    ssl_certificate_key /path/nginx.key;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    access_log /path/access.log main;

    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    proxy_buffering on;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    location /static/ {
        alias /path/web/static/;
        access_log off;
        index index.html index.htm;
    }

    location / {
        try_files $uri @uwsgi;
        root /path/www/;
        index index.html index.htm;
    }

    location @uwsgi {
        include uwsgi_params;
        uwsgi_pass unix:/path/web/uwsgi.sock;
    }
}
If you can reconstruct the exact POST request via curl, or otherwise measure the actual size of the response headers, you can set a proper value for uwsgi_buffer_size (the directive that applies in your case).
Here's my post with some insight into a similar directive, proxy_buffer_size. There are many *_buffer_size directives; each "proxy"-like NGINX module has one (fastcgi, proxy, uwsgi), but how you approach their tuning (and how they essentially work) is the same.
You can also try, without measuring, placing this directly in the server block:
uwsgi_buffer_size 16k;
uwsgi_busy_buffers_size 24k;
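If you do want to measure first, one rough approach is to replay the request with curl and count the response header bytes. The form field and cookie jar below are placeholders for your real request, and you may need to run this against a working environment (or the backend directly), since the failing request only returns nginx's own error page:

# replay the request, discard the body, and count the bytes in the response headers
curl -s -o /dev/null -D - -X POST \
     -b cookies.txt -d 'product_id=1' \
     https://mysite.com/add_to_cart | wc -c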
Related
When I try to upload a big CSV file of about 600MB in my project, which is hosted on DigitalOcean, it shows a 502 Bad Gateway error (nginx).
The application is a data conversion application.
This works fine when running locally.
sudo tail -30 /var/log/nginx/error.log
shows
[error] 132235#132235: *239 upstream prematurely closed connection while reading response header from upstream, client: client's ip , server: ip, request: "POST /submit/ HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/submit/", host: "ip", referrer: "http://ip/"
sudo nano /etc/nginx/sites-available/myproject
shows
server {
    listen 80;
    server_name ip;
    client_max_body_size 999M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias /root/static/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
nginx.conf
user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
I also have a JavaScript loader running while the conversion process takes place.
How can I fix this?
If you are using Django 3.1 or higher, you can make your file processing asynchronous this way and return a response to the user while the file conversion takes place.
Your view should look something like this:
import asyncio
from django.http import JsonResponse
from asgiref.sync import sync_to_async

@sync_to_async
def crunching_stuff(my_file):
    # do your conversion here
    pass

async def upload(request):
    json_payload = {
        "message": "Your file is being converted"
    }
    # uploaded files live in request.FILES, not request.POST
    my_file = request.FILES.get('file')
    asyncio.create_task(crunching_stuff(my_file))
    return JsonResponse(json_payload)
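To wire this up, the async view is routed like any other view. A minimal sketch (the module path and URL pattern are assumptions, matched to the POST /submit/ from the question):

# hypothetical urls.py
from django.urls import path
from .views import upload

urlpatterns = [
    path('submit/', upload, name='upload'),
]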
On the front end, if you use Dropzone.js your user can see the file upload progress and will get a response quicker. This is a better user experience.
https://www.dropzonejs.com/
This error can indicate multiple problems. The fact that it works locally strengthens the likelihood that the issue lies on the nginx side.
You can try to solve it by increasing the timeout thresholds (as suggested here) and the buffer sizes. Add this to your server's nginx.conf:
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
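Since the log shows the upstream closing the connection, it can also help to raise gunicorn's own worker timeout, which defaults to 30 seconds and which a 600MB conversion can easily exceed. For example, assuming the module path:

gunicorn --workers 3 --timeout 300 \
         --bind unix:/run/gunicorn.sock \
         myproject.wsgi:application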
A 502 error can be caused by almost anything.
Check your nginx error log as follows:
tail -f /var/log/nginx/error.log
In my case it was because the header was too big, so I had to increase the buffer sizes in /etc/nginx/sites-enabled/default as Chen.A described.
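For reference, a minimal sketch of where those directives can go in that file (using the socket path from the question):

server {
    # ...
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}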
I'm having an issue getting nginx to pass control of routing over to my Django server. By default it checks the '/' path and, if the user isn't logged in, redirects to '/login', then upon login passes back to '/'. The login page works fine until you submit, then it throws an 'Internal server error'. The server is Ubuntu 16.04, and the Python is 3.5 inside a virtualenv. Let me know if I need to provide the gunicorn service config.
My nginx is as follows:
server {
    server_name example.com;
    rewrite ^(.*) https://www.example.com permanent;
}

server {
    listen 80;
    server_name www.example.com;
    rewrite ^(.*) https://www.example.com permanent;
}

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl on;
    ssl_certificate /etc/nginx/12345678.crt;
    ssl_certificate_key /etc/nginx/ssl.key;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ { root /home/ubuntu/app; }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/app/app.sock;
    }
}
Background
A Docker container running a supervisord process with two processes started: nginx and uwsgi (yes, I understand this may be doing Docker 'wrong'; that's not the question).
The uwsgi process serves a Python Flask app. This has a logger connected, and prints the headers dictionary to the info log.
I have a Postman request that tests from my local box, hits the Docker container, routes via nginx, and hits the Python app, with the info log appended.
Custom headers sent by Postman are being logged (thanks to ignore_invalid_headers off;).
The Problem
I'd like to use nginx to decorate incoming requests with some further headers. No matter what I try, I can't get it to work; none of the headers I add in the nginx conf seems to make it through to the Flask app.
I've tried proxy_set_header and uwsgi_param. No variant seems to work.
Please note: I want a request header. I believe add_header is for response headers.
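For reference, my understanding of the distinction (the header names here are just examples):

add_header X-Example some-value;          # added to the response sent back to the client
proxy_set_header X-Example some-value;    # added to the request forwarded to a proxy_pass upstream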
nginx.conf:
user nginx;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    underscores_in_headers off;
    ignore_invalid_headers off;

    upstream myapp {
        server unix:/run/myapp.sock;
    }

    server {
        listen 80;

        location / {
            include uwsgi_params;
            uwsgi_pass myapp;
            proxy_set_header x-proxy-set-header x-proxy-set-header-value;
            proxy_set_header sampl-header ONE;
            uwsgi_param X-add-uwsgi-param x-added-uwsgi-param-value;
        }
    }
}
daemon off;
Any help would be hugely appreciated!!
So. Solved. As Richard Smith also found, the proxy_set_header directives don't apply because I'm using uwsgi_pass (the uwsgi protocol), not proxy_pass. So this works:
location / {
    include uwsgi_params;
    uwsgi_pass myapp;
    uwsgi_pass_request_headers on;
    uwsgi_param HTTP_X_TESTING 'bar';
}
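On the Flask side, a uwsgi_param prefixed with HTTP_ arrives as an ordinary request header. A minimal sketch of reading it (the app and route are placeholders):

from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def index():
    # HTTP_X_TESTING shows up as the X-Testing request header
    return request.headers.get('X-Testing', 'not set')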
And we're cooking on gas... (air punch)
I'm worried that this question may be one that could be answered very simply if I just knew what to look for, so I apologise if this is something that's been addressed before.
I've set up a production web server for a Django app using nginx and uwsgi. It's got a let's encrypt SSL certificate installed, and now I'd like to automate the renewal.
I used the method referenced in this article to add the certificate: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04 by adding the .well-known directory to the server block.
location ~ /.well-known {
    allow all;
}
I've tried to keep this, but /.well-known now returns 403 Forbidden from nginx when the rest of the server config is added (provided below).
Can anyone tell me what I've done wrong or how to solve this?
Here's the server config file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.website.co.uk;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    include snippets/ssl-website.co.uk.conf;
    include snippets/ssl-params.conf;

    location /.well-known/ {
        root /home/user/website;
        allow all;
    }

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/user/website;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/website/website.sock;
    }
}
Thanks in advance. I'm still quite new to this and trying to learn.
I had a similar problem. This answer was my solution:
https://stackoverflow.com/a/38949101/4098053
I hope this will help you too!
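The gist, as I applied it (a sketch; whether it matches the linked answer exactly is an assumption): give the ACME challenge path its own prefix location that takes priority, and make sure the nginx worker can actually read the webroot directory:

location ^~ /.well-known/acme-challenge/ {
    root /home/user/website;
    default_type text/plain;
    allow all;
}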
Could someone please post an nginx configuration file that shows how to properly route the following URLs to gunicorn:
http://www.example.com
https://www.example.com
http://testing.example.com
https://testing.example.com
Some questions:
Why do some nginx configuration files contain an "upstream" directive?
I am running 2N+1 gunicorn workers. Would I also need multiple nginx workers? By that I mean, should I even use the "worker_processes" directive, since nginx is just supposed to serve static files?
How do I set up buffering/caching?
server {
    listen 80 default_server deferred;
    listen 443 default_server deferred ssl;
    listen [::]:80 ipv6only=on default_server deferred;
    listen [::]:443 ipv6only=on default_server deferred ssl;

    server_name example.com www.example.com testing.example.com;
    root /path/to/static/files;

    # Include SSL stuff

    location / {
        location ~* \.(css|gif|ico|jpe?g|js[on]?p?|png|svg|txt|xml)$ {
            access_log off;
            add_header Cache-Control "public";
            add_header Pragma "public";
            expires 365d;
            log_not_found off;
            tcp_nodelay off;
            open_file_cache max=16 inactive=600s; # 10 minutes
            open_file_cache_errors on;
            open_file_cache_min_uses 2;
            open_file_cache_valid 300s; # 5 minutes
        }

        try_files $uri @gunicorn;
    }

    location @gunicorn {
        add_header X-Proxy-Cache $upstream_cache_status;
        expires epoch;

        proxy_cache proxy;
        proxy_cache_bypass $nocache;
        proxy_cache_key "$request_method@$scheme://$server_name:$server_port$uri$args";
        proxy_cache_lock on;
        proxy_cache_lock_timeout 2000;
        proxy_cache_use_stale error timeout invalid_header updating http_500;
        proxy_cache_valid 200 302 1m;
        proxy_cache_valid 301 1d;
        proxy_cache_valid any 5s;
        proxy_http_version 1.1;
        proxy_ignore_headers Cache-Control Expires;
        proxy_max_temp_file_size 1m;
        proxy_no_cache $nocache;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_pass http://gunicorn;
    }
}
And answering your other questions:
The upstream directive can be used to simplify any *_pass directives in your nginx configuration and for load-balancing situations. If you have more than one gunicorn server, you can do something like the following:
upstream gunicorn {
    # each entry is host:port (or a unix: socket); no scheme is allowed here
    server gunicorn1:8000;
    server gunicorn2:8000;
}

server {
    location / {
        # the scheme belongs on proxy_pass, pointing at the upstream name
        proxy_pass http://gunicorn;
    }
}
Set worker_processes of nginx to auto if your nginx version already has the auto option. The number of nginx worker processes has nothing to do with the worker processes of your gunicorn application. And yes, even if you are only serving static files, setting the correct number of worker processes will increase the total number of requests your nginx can handle, so it is therefore recommended to set it up right. If your nginx version doesn't have the auto option, simply set it to your real physical CPU count or physical CPU core count.
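For example, in the main context of nginx.conf:

worker_processes auto;    # or an explicit count, e.g. worker_processes 4;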
I included a sample configuration for caching the responses from your gunicorn application server, plus the open-file cache of UNIX-based systems for the static files. I think it's pretty obvious how to set things up, but if you want me to explain any directive in more detail, simply leave a comment and I'll edit my answer.
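Note that the sample above assumes two things exist in the http context: a cache zone named proxy (created with proxy_cache_path) and a $nocache variable (created with map). A minimal sketch, with the path and cookie pattern as assumptions:

http {
    # cache zone named "proxy", referenced by the proxy_cache directive above
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=proxy:10m max_size=1g inactive=10m;

    # hypothetical: bypass and skip the cache whenever a session cookie is present
    map $http_cookie $nocache {
        default 0;
        ~sessionid 1;
    }
}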