Background
A Docker container running supervisord, which starts 2 processes: nginx and uWSGI (yes, I understand this may be doing Docker 'wrong'; that's not the question).
uWSGI serves a Python Flask app. The app has a logger connected and prints the request headers dictionary to the info log.
I have a Postman request that tests from my local box: it hits the Docker container, routes via nginx, and reaches the Python app, which appends to the info log.
Custom headers sent by Postman are being logged (thanks to ignore_invalid_headers off;).
The Problem
I'd like to use nginx to decorate incoming requests with some further headers. No matter what I try, I can't get it to work: none of the headers I add in the nginx conf seems to make it through to the Flask app.
I've tried both proxy_set_header and uwsgi_param; no variant seems to work.
Please note: I want a request header. I believe add_header is for response headers.
nginx.conf:
user nginx;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
underscores_in_headers off;
ignore_invalid_headers off;
upstream myapp {
server unix:/run/myapp.sock;
}
server {
listen 80;
location / {
include uwsgi_params;
uwsgi_pass myapp;
proxy_set_header x-proxy-set-header x-proxy-set-header-value;
proxy_set_header sampl-header ONE;
uwsgi_param X-add-uwsgi-param x-added-uwsgi-param-value;
}
}
}
daemon off;
Any help would be hugely appreciated!!
So. Solved. As Richard Smith also found, the proxy_set_header approach doesn't work because I'm using uwsgi_pass (the uwsgi protocol), not proxy_pass.
So, this works:
location / {
include uwsgi_params;
uwsgi_pass myapp;
uwsgi_pass_request_headers on;
uwsgi_param HTTP_X_TESTING 'bar';
}
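On the Flask side the header should now show up in the logged dictionary, since uWSGI translates the HTTP_X_TESTING variable into a regular X-Testing request header. A minimal sketch to verify (the route is hypothetical):
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # the uwsgi_param HTTP_X_TESTING set in nginx arrives as a normal header
    app.logger.info("headers: %s", dict(request.headers))
    return request.headers.get("X-Testing", "header missing")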
And we're cooking on gas...
Air Punch
I am currently trying to launch my Django website using nginx and Gunicorn, but my static files are not being found. I am on an Amazon Linux AMI, not Ubuntu, which makes things a bit harder and different, because I installed nginx with
sudo amazon-linux-extras install nginx1.12
I was looking at these tutorials:
http://www.threebms.com/index.php/2020/07/27/set-up-a-django-app-in-aws-with-gunicorn-nginx/
https://linuxtut.com/en/ce98f7afda7738c8cc1b/
but whenever I launch my website with
gunicorn --bind 0.0.0.0:8000 myappname.wsgi
it always says my static files are not found.
I have already run
python manage.py collectstatic
and my settings contain:
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static/")
This is my config file, found at /etc/nginx/nginx.conf (opened with sudo vi):
I don't really know if I should keep the first server block that was there by default. The only part that is not default is the second server block; the tutorials say to just add a new one at the end.
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
listen [::]:80;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
server {
listen 8000;
#not real address but same format
server_name 12.18.123.613;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/
{
autoindex on;
alias /home/ec2-user/pydjangoenv/myprojname/static/;
}
location / {
proxy_pass http://12.18.123.613;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers PROFILE=SYSTEM;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
and the tree goes like this
Django-cloud9 (/home/ec2-user)
└── pydjangoenv
    └── myprojname
        ├── blog             <- this is an app
        ├── myprojname
        ├── static
        ├── users            <- this is an app
        ├── manage.py
        ├── requirements.txt
        └── env
I have really been stuck on this for three days, any help is appreciated :-)
EDIT
After adding
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
to my urls.py, Django serves the static files, but not when DEBUG is False;
with DEBUG off the static files don't work. I have really tried everything, please help.
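For what it's worth, Django's static() helper only serves files when DEBUG is True; with DEBUG off, nginx's location /static/ block has to serve STATIC_ROOT instead. A common urls.py pattern makes that explicit (a sketch, assuming the imports match the project):
from django.conf import settings
from django.conf.urls.static import static

if settings.DEBUG:
    urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)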
When I try to upload a big CSV file (about 600 MB) in my project, which is hosted on DigitalOcean, the upload starts but then shows a 502 Bad Gateway error (nginx).
The application is a data conversion application.
This works fine when running locally.
sudo tail -30 /var/log/nginx/error.log
shows
[error] 132235#132235: *239 upstream prematurely closed connection while reading response header from upstream, client: client's ip , server: ip, request: "POST /submit/ HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/submit/", host: "ip", referrer: "http://ip/"
sudo nano /etc/nginx/sites-available/myproject
shows
server {
listen 80;
server_name ip;
client_max_body_size 999M;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
alias /root/static/;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
nginx.conf
user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
I also have a JavaScript loader running while the conversion process takes place.
How can I fix this?
If you are using Django 3.1 or higher, you can make your file processing asynchronous this way, returning a response to the user while the file conversion takes place.
Your view should look something like this:
import asyncio
from django.http import JsonResponse
from asgiref.sync import sync_to_async

@sync_to_async
def crunching_stuff(my_file):
    # do your conversion here
    pass

async def upload(request):
    json_payload = {
        "message": "Your file is being converted"
    }
    my_file = request.POST.get('file')
    # fire-and-forget: the conversion keeps running after the response returns
    asyncio.create_task(crunching_stuff(my_file))
    return JsonResponse(json_payload)
On the front end, if you use Dropzone.js your user can see the file upload progress and will get a response quicker. This is a better user experience.
https://www.dropzonejs.com/
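One caveat worth adding (an assumption beyond the original answer): asyncio.create_task needs a running event loop, so this view has to run under an ASGI server rather than plain WSGI gunicorn. For example (the project name is a placeholder):
pip install uvicorn
uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000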
This error can indicate multiple problems. The fact that it works for you locally strengthens the probability that the issue lies on the nginx side.
You can try to solve it by increasing the timeout thresholds (as suggested here) and the buffer sizes. Add this to your server's nginx.conf:
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
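Since "upstream prematurely closed connection" usually means the gunicorn worker was killed before it finished responding, it may also be worth raising gunicorn's own worker timeout to match. A sketch, assuming a typical invocation (the module path is a placeholder; the socket path is from your error log):
gunicorn --workers 3 --timeout 300 --bind unix:/run/gunicorn.sock myproject.wsgi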
A 502 error can be caused by any number of things.
Check your nginx error log as follows:
tail /var/log/nginx/error.log -f
In my case it was because the header was too big, so I had to increase the buffer size in /etc/nginx/sites-enabled/default as Chen.A described.
I have an e-commerce project written in Python with the Flask framework. I keep shopping-cart information in the session; when I try to add a product to the session, nginx gives this error:
upstream sent too big header while reading response header from upstream, client: xx.xxx.xx.xxx, server: mysite.com, request: "POST /add_to_cart HTTP/1.1", upstream: "uwsgi://unix:/path/uwsgi.sock:", host: "mysite.com"
This occurs when I have a lot of information in the session.
I tried adding fastcgi and proxy_buffer parameters, but it's still not working. Here is my nginx conf file:
server {
listen 443 ssl;
server_name mysite.com;
ssl_certificate /path/nginx.pem;
ssl_certificate_key /path/nginx.key;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
access_log /path/access.log main;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
location /static/ {
alias /path/web/static/;
access_log off;
index index.html index.htm;
}
location / {
try_files $uri @uwsgi;
root /path/www/;
index index.html index.htm;
}
location @uwsgi {
include uwsgi_params;
uwsgi_pass unix:/path/web/uwsgi.sock;
}
}
If you're able to reconstruct the exact POST request via curl, or otherwise measure the actual header size, you can specify the proper size for uwsgi_buffer_size (the directive that is relevant in your case).
Here's my post that has some insight into a similar directive, proxy_buffer_size. There are many *_buffer_size directives; each "proxy"-like NGINX module has one (fastcgi, proxy, uwsgi), but how you approach their tuning (and how they essentially work) is the same.
You can try, without measuring, by placing these directly in the server block:
uwsgi_buffer_size 16k;
uwsgi_busy_buffers_size 24k;
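To actually measure, you can replay the request with curl and count the response-header bytes; a sketch, where the cookie and form field are placeholders (the session cookie is usually what inflates the header here):
curl -s -D - -o /dev/null -X POST 'https://mysite.com/add_to_cart' \
     -H 'Cookie: session=...' --data 'product_id=1' | wc -c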
Trying to set up nginx to handle 2 domains, I got stuck on some problems. While my setup with two domains works correctly for static HTML, I tried to push forward and start two Python apps behind nginx. I tried several WSGI containers and different micro-frameworks, but the problem is that nginx doesn't route the virtual hosts correctly; rather, it serves only one app at both domain addresses.
Here is Nginx conf:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
server {
listen 80;
server_name www.domainA.com;
root /var/www/domainA.com;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Originating-IP $remote_addr;
proxy_set_header HTTP_REMOTE_ADDR $remote_addr;
proxy_set_header REMOTE_ADDR $remote_addr;
proxy_set_header CLIENT_IP $remote_addr;
proxy_pass http://127.0.0.2:7000;
}
}
server {
listen 80;
server_name www.domainB.com;
root /var/www/domainB.com;
location / {
... ... blah blah...same story...except this proxy pass.....
proxy_pass http://127.0.0.1:5000;
}
}
}
Any help?
EDIT:
Just tried adding an empty server block as the first block, and it returns 404.
Are these outward-facing websites?
If you put the full IP in your listen clause, it should start working correctly:
listen 512.548.595.485:80;
Right now both server blocks listen on the same address for both sites, which is causing a conflict.
Hope this helps.
In the scenario where virtual hosts share an IP and port, nginx selects the right virtual host by comparing the Host header sent by the client to each server's server_name entry. If you have curl, use the following to see exactly what you're sending for the Host header:
curl -s --trace-ascii - http://www.domainA.com | grep 'Host:'
To make your server_name more flexible, use the .example.com notation. This is shorthand for example.com and *.example.com. Or just add as many server_name entries as you need.
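For instance:
server_name .domainA.com;    # shorthand for domainA.com and *.domainA.com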
Next, confirm your apps are listening on the right IPs and ports. Shell into your server and try:
curl -I 'http://127.0.0.1:5000'
curl -I 'http://127.0.0.2:7000'
Finally, I ran into this same problem. In my test setup I hadn't added all the flavours that would make nginx satisfied. Then I found THIS LINK:
If the “Host” header field does not match a server name, NGINX will route the request to the default server for this port. The default server is the first one listed in the nginx.conf file. This will be overridden if the default_server parameter is set in the listen directive within a server context. An example is given below.
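That example looks something like this (a minimal sketch):
server {
    listen 80 default_server;
    server_name _;
    return 444;    # drop requests whose Host matches no other server_name
}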
Nginx docs and tutorials are dispersed over a few web locations, so finding a few of them doesn't mean you've got all the answers you need.
I think this is your solution. Create a bash script named virtualhost.sh. Copy and paste the following code:
#!/bin/bash
domain=$1
root="/data/$domain"
block="/etc/nginx/sites-available/$domain"
# Create the Document Root directory
mkdir -p $root
# Assign ownership to your regular user account
chown -R $USER:$USER $root
# Create the Nginx server block file
# (nginx's $variables below are escaped so the shell doesn't expand them inside the heredoc):
tee $block > /dev/null <<EOF
server {
listen 80;
listen [::]:80;
root /data/$domain;
index index.php index.html index.htm;
server_name $domain www.$domain;
location / {
try_files \$uri \$uri/ =404;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
include fastcgi_params;
}
location ~ /\.ht {
access_log off;
log_not_found off;
deny all;
}
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
access_log off;
log_not_found off;
expires 30d;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
}
EOF
# Link to make it available
ln -s $block /etc/nginx/sites-enabled/
# Test configuration and reload if successful
nginx -t && service nginx reload
Then you need to call this bash script with your domain as the argument:
virtualhost.sh www.yourdomain.com
Could someone please post an nginx configuration file that shows how to properly route the following URLs to gunicorn:
http://www.example.com
https://www.example.com
http://testing.example.com
https://testing.example.com
Some questions:
why do some nginx configuration files contain an "upstream" command?
I am running 2N+1 gunicorn workers. Would I also need multiple nginx workers? By that I mean: should I even set the "worker_processes" directive, since nginx is just supposed to serve static files?
how to set up buffering/caching?
server {
listen 80 default_server deferred;
listen 443 default_server deferred ssl;
listen [::]:80 ipv6only=on default_server deferred;
listen [::]:443 ipv6only=on default_server deferred ssl;
server_name example.com www.example.com testing.example.com;
root /path/to/static/files;
# Include SSL stuff
location / {
location ~* \.(css|gif|ico|jpe?g|js[on]?p?|png|svg|txt|xml)$ {
access_log off;
add_header Cache-Control "public";
add_header Pragma "public";
expires 365d;
log_not_found off;
tcp_nodelay off;
open_file_cache max=16 inactive=600s; # 10 minutes
open_file_cache_errors on;
open_file_cache_min_uses 2;
open_file_cache_valid 300s; # 5 minutes
}
try_files $uri @gunicorn;
}
location @gunicorn {
add_header X-Proxy-Cache $upstream_cache_status;
expires epoch;
proxy_cache proxy;
proxy_cache_bypass $nocache;
proxy_cache_key "$request_method#$scheme://$server_name:$server_port$uri$args";
proxy_cache_lock on;
proxy_cache_lock_timeout 2000;
proxy_cache_use_stale error timeout invalid_header updating http_500;
proxy_cache_valid 200 302 1m;
proxy_cache_valid 301 1d;
proxy_cache_valid any 5s;
proxy_http_version 1.1;
proxy_ignore_headers Cache-Control Expires;
proxy_max_temp_file_size 1m;
proxy_no_cache $nocache;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://gunicorn;
}
}
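One thing to note (an addition, not from the original answer): $nocache in the block above is not a built-in variable; it has to be defined in the http block, for example with a map (the cookie name is an assumption):
map $http_cookie $nocache {
    default       0;
    ~sessionid    1;    # bypass the cache for logged-in sessions
}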
And answering your other questions:
The upstream directive can be used to simplify any *_pass directives in your nginx configuration and for load balancing situations. If you have more than one gunicorn server you can do something like the following:
upstream gunicorn {
server gunicorn1;    # the server directive takes a host[:port], not a URL
server gunicorn2;
}
server {
location / {
proxy_pass http://gunicorn;
}
}
Set worker_processes of nginx to auto if your nginx version already has the auto option. The number of worker processes of your nginx has nothing to do with the worker processes of your gunicorn application. And yes, even if you are only serving static files, setting the correct number of worker processes will increase the total number of requests your nginx can handle, so it's therefore recommended to set it up right. If your nginx version doesn't have the auto option, simply set it to your real physical CPU count or real physical CPU core count.
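If your nginx lacks the auto option, on Linux either of these reports the core count to plug into worker_processes:
nproc
grep -c ^processor /proc/cpuinfo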
I included a sample configuration for caching the responses from your gunicorn application server, plus the open-file cache of UNIX-based systems for the static files. I think it's pretty obvious how to set things up. If you want me to explain any particular directive in more detail, simply leave a comment and I'll edit my answer.
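One assumption the cached setup above makes: the keys_zone named "proxy" (referenced by proxy_cache proxy;) must be declared in the http block, along these lines (path and sizes are placeholders):
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=proxy:10m max_size=1g inactive=60m;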