Trying to set up Nginx to handle two domains, I've run into a problem. The two-domain setup works correctly for static HTML, but when I pushed forward and started two Python apps behind Nginx (I tried several WSGI containers and different micro frameworks), Nginx stopped handling the virtual hosts: it serves only one app at both domain addresses.
Here is my Nginx conf:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    server {
        listen 80;
        server_name www.domainA.com;
        root /var/www/domainA.com;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Originating-IP $remote_addr;
            proxy_set_header HTTP_REMOTE_ADDR $remote_addr;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header CLIENT_IP $remote_addr;
            proxy_pass http://127.0.0.2:7000;
        }
    }

    server {
        listen 80;
        server_name www.domainB.com;
        root /var/www/domainB.com;

        location / {
            # ... same proxy_set_header directives as above, except this proxy_pass:
            proxy_pass http://127.0.0.1:5000;
        }
    }
}
Any help?
EDIT:
I just tried adding an empty server block as the first block, and it returns 404.
Are these outward-facing websites?
If you put the full IP (your server's public address) in your listen directive, things should start working correctly:
listen 203.0.113.10:80;
Right now you have the same listen for both sites, which is causing a conflict.
Hope this helps.
In the scenario where virtual hosts share an IP and port, nginx selects the right virtual host by comparing the Host header sent by the client to each server's server_name entry. If you have curl, use the following to see exactly what you're sending for the Host header:
curl -s --trace-ascii - http://www.domainA.com | grep 'Host:'
To make your server_name more flexible, use the .example.com notation. This is shorthand for example.com and *.example.com. Or just add as many server_name entries as you need.
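For instance, with a placeholder domain, these two forms are equivalent:
server_name .example.com;               # shorthand
server_name example.com *.example.com;  # equivalent explicit form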
Next, confirm your apps are listening on the right IPs and ports. Shell into your server and try:
curl -I 'http://127.0.0.1:5000'
curl -I 'http://127.0.0.2:7000'
In the end I tracked down the problem myself. In my test conditions I hadn't added everything that would make Nginx satisfied. Then I found THIS LINK:
If the “Host” header field does not match a server name, NGINX will route the request to the default server for this port. The default server is the first one listed in the nginx.conf file. This will be overridden if the default_server parameter is set in the listen directive within a server context. An example is given below.
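The example that quote refers to looks like this (reconstructed from the nginx docs; the domain names are placeholders):
server {
    listen 80 default_server;
    server_name example.net www.example.net;
    # ...
}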
Nginx docs and tutorials are dispersed across several sites, so finding a few of them doesn't mean you have all the answers you need.
I think this is your solution. Create a Bash script named virtualhost.sh and paste in the following code:
#!/bin/bash
domain=$1
root="/data/$domain"
block="/etc/nginx/sites-available/$domain"

# Create the document root directory
mkdir -p "$root"

# Assign ownership to your regular user account
chown -R "$USER:$USER" "$root"

# Create the Nginx server block file. Note the escaped \$ so the nginx
# variables survive expansion in the unquoted heredoc.
tee "$block" > /dev/null <<EOF
server {
    listen 80;
    listen [::]:80;

    root /data/$domain;
    index index.php index.html index.htm;
    server_name $domain www.$domain;

    location / {
        try_files \$uri \$uri/ =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        include fastcgi_params;
    }

    location ~ /\.ht {
        access_log off;
        log_not_found off;
        deny all;
    }

    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log off;
        log_not_found off;
        expires 30d;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
}
EOF

# Link to make it available
ln -s "$block" /etc/nginx/sites-enabled/

# Test the configuration and reload if successful
nginx -t && service nginx reload
Then call the script, passing the bare domain (the generated server block adds www. itself):
./virtualhost.sh yourdomain.com
Related
I've deployed a Django application on DigitalOcean.
First off, when I try to secure it with HTTPS and SSL, I get this error when I run nginx -t:
nginx: [emerg] invalid parameter "server_name" in /etc/nginx/sites-enabled/django:12
nginx: configuration file /etc/nginx/nginx.conf test failed
upstream app_server {
    server unix:/home/django/gunicorn.socket fail_timeout=0;
}

server {
    #listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;
    listen 443 ssl
    server_name domain.com
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    root /usr/share/nginx/html;
    index index.html index.htm;

    client_max_body_size 4G;
    server_name _;

    keepalive_timeout 5;

    # Your Django project's media files - amend as required
    location /media {
        alias path/to/media;
    }

    # Your Django project's static files - amend as required
    location /static {
        alias path/to/static;
    }

    # Proxy the static assets for the Django Admin panel
    location /static/admin {
        alias path/to/staticadmin;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://app_server;
    }
}

server {
    listen 80;
    server_name domain.com;
    return 301 https://$host$request_uri;
}
Furthermore, I can access the website using the IP address but not the registered domain name; that results in a 400 Bad Request page.
Could this be an issue with settings.py?
For reference, settings.py has ALLOWED_HOSTS = ['*']. What list do I provide in the ip_addresses() function?
Are these two problems related?
Using Django v1.10.5.
You're missing semicolons on a bunch of lines (the listen 443 ssl, server_name, and ssl_certificate lines); that's why nginx -t is failing.
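With the semicolons restored, those lines from your config read:
listen 443 ssl;
server_name domain.com;
ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;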
I have been trying to solve this problem for three entire days without a solution. Now I am under pressure at work and I really need your help.
I know that nginx is listening on the correct port, 20154 (checked with netstat), and nginx -t reports OK. The logs show no errors because the client can't even reach the server.
Maybe the problem is in the uwsgi .ini file, I don't know, so I'm posting my conf files and uwsgi init file here.
I hope to solve this problem with your help, and to learn more in the process.
nginx.conf file:
user user;
worker_processes 1;
pid /var/run/nginx.pid;
events {
worker_connections 768;
multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
The nginx sites-enabled file:
upstream django {
    server unix:///home/ctag/env_Compass4D/Compass4D/Compass4D.sock; # for a file socket
}

server {
    listen 20154;

    location /assets/ {
        root /home/ctag/env_Compass4D/Compass4D/;
    }

    location /doc/ {
        alias /usr/share/doc/;
        #alias /home/ctag/Compass4D/env_Compass4D/Compass4D
        autoindex on;
        #allow 127.0.0.1;
    }

    location / {
        #uwsgi_pass unix:/home/ctag/env_Compass4D/Compass4D/Compass4D.sock;
        proxy_pass http://unix:/home/ctag/env_Compass4D/Compass4D/Compass4D.sock;
        #proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }

    location /Compass4D {
        root /home/ctag/env_Compass4D/Compass4D/;
    }
}
The uwsgi .ini file:
# Compass4D_uwsgi.ini file
[uwsgi]
# Django settings
# full path to the project directory
chdir = /home/ctag/env_Compass4D/Compass4D/
# Django wsgi module
module = Compass4D.wsgi
# master process
master = true
# number of (worker) processes
processes = 5
# path to the socket
socket = /home/ctag/env_Compass4D/Compass4D/Compass4D.sock
# socket permissions
chmod-socket = 666
# logging to catch startup failures
#logto = /tmp/errlog
# clean up the environment on shutdown
vacuum = true
This is the new configuration that worked for me (you can see the changes), along with the command I had to use. Thanks.
The new sites-enabled file:
upstream django {
    server unix:///home/ctag/env_Compass4D/Compass4D/Compass4D.sock; # for a file socket
}

server {
    listen 80; ## listen for ipv4; this line is default and implied
    server_name ~^.*$;

    location /static/ {
        root /home/ctag/env_Compass4D/Compass4D/;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
    }

    location / {
        #uwsgi_pass unix:/home/ctag/env_Compass4D/Compass4D/Compass4D.sock;
        proxy_pass http://unix:/home/ctag/env_Compass4D/Compass4D/Compass4D.sock;
        #proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }

    location /Compass4D/ {
        root /home/ctag/env_Compass4D/Compass4D/;
    }
}
The uWSGI command that I used to run the server in the background:
uwsgi --ini env_Compass4D/Compass4D/Compass4D_uwsgi.ini &
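As a side note, instead of backgrounding the process with &, uWSGI can daemonize itself and write its log to a file (the log path below is just an example):
uwsgi --ini env_Compass4D/Compass4D/Compass4D_uwsgi.ini --daemonize /var/log/uwsgi/Compass4D.log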
I'm trying to get the following setup to work with gunicorn and nginx. Everything works until I add the second server config...
upstream app_server_djangoapp {
    server localhost:8002 fail_timeout=0;
}

server {
    listen 80;
    server_name api.domain.tld;

    access_log /var/log/nginx/guni-access.log;
    error_log /var/log/nginx/guni-error.log info;

    keepalive_timeout 5;

    # Size in megabytes to allow for uploads.
    client_max_body_size 20M;

    # path for static files
    root /home/username/webapps/guni/static;

    location /docs/ {
        autoindex on;
        alias /srv/site/docs/buildHTML/html/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://app_server_djangoapp;
            break;
        }
    }
}

server {
    listen 80;
    server_name flower.domain.tld;

    location / {
        proxy_pass http://localhost:5555;
    }
What am I doing wrong? I need two subdomains: one mapped to my Django app and the other mapped to my monitoring software (flower) on port 5555.
The log file states:
2014/11/21 12:03:27 [emerg] 962#0: unexpected end of file, expecting "}" in /etc/nginx/sites-enabled/default:47
Your code is missing a closing "}" at the very end:
server {
    listen 80;
    server_name flower.domain.tld;

    location / {
        proxy_pass http://localhost:5555;
    }
}
For future reference:
You can run nginx -t (with sudo if needed) to test the configuration before reloading nginx; this gives a fairly good description of any errors in your configuration file(s).
Could someone please post an nginx configuration file that shows how to properly route the following URLs to gunicorn:
http://www.example.com
https://www.example.com
http://testing.example.com
https://testing.example.com
Some questions:
Why do some nginx configuration files contain an "upstream" directive?
I am running 2N+1 gunicorn workers. Would I also need multiple nginx workers? By that I mean: should I even set the worker_processes directive, since nginx is just supposed to serve static files?
How do I set up buffering/caching?
server {
    listen 80 default_server deferred;
    listen 443 default_server deferred ssl;
    listen [::]:80 ipv6only=on default_server deferred;
    listen [::]:443 ipv6only=on default_server deferred ssl;

    server_name example.com www.example.com testing.example.com;
    root /path/to/static/files;

    # Include SSL stuff

    location / {
        location ~* \.(css|gif|ico|jpe?g|js[on]?p?|png|svg|txt|xml)$ {
            access_log off;
            add_header Cache-Control "public";
            add_header Pragma "public";
            expires 365d;
            log_not_found off;
            tcp_nodelay off;
            open_file_cache max=16 inactive=600s; # 10 minutes
            open_file_cache_errors on;
            open_file_cache_min_uses 2;
            open_file_cache_valid 300s; # 5 minutes
        }

        try_files $uri @gunicorn;
    }

    location @gunicorn {
        add_header X-Proxy-Cache $upstream_cache_status;
        expires epoch;
        proxy_cache proxy;
        proxy_cache_bypass $nocache;
        proxy_cache_key "$request_method#$scheme://$server_name:$server_port$uri$args";
        proxy_cache_lock on;
        proxy_cache_lock_timeout 2000;
        proxy_cache_use_stale error timeout invalid_header updating http_500;
        proxy_cache_valid 200 302 1m;
        proxy_cache_valid 301 1d;
        proxy_cache_valid any 5s;
        proxy_http_version 1.1;
        proxy_ignore_headers Cache-Control Expires;
        proxy_max_temp_file_size 1m;
        proxy_no_cache $nocache;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gunicorn;
    }
}
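Note that this sample assumes a proxy_cache zone named proxy and a $nocache variable defined at the http level. A minimal sketch of those supporting directives (the values and cookie name are illustrative, adjust them to your app):
# in the http block
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=proxy:10m max_size=1g inactive=60m;

# bypass the cache when a session cookie is present
map $http_cookie $nocache {
    default        0;
    ~sessionid     1;
}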
And answering your other questions:
The upstream directive can be used to simplify any *_pass directives in your nginx configuration and for load balancing situations. If you have more than one gunicorn server you can do something like the following:
upstream gunicorn {
    # upstream entries take host[:port], with no http:// scheme
    server gunicorn1;
    server gunicorn2;
}

server {
    location / {
        proxy_pass http://gunicorn;
    }
}
Set worker_processes of nginx to auto if your nginx version already has the auto option. The number of nginx worker processes has nothing to do with the worker processes of your gunicorn application. And yes, even if you are only serving static files, setting the correct number of worker processes will increase the total number of requests your nginx can handle, and it's therefore recommended to set it up right. If your nginx version doesn't have the auto option, simply set it to your real physical CPU count or real physical CPU core count.
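For example, in the main context of nginx.conf:
worker_processes auto;   # or an explicit count on older versions, e.g. worker_processes 4;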
I included a sample configuration for caching the responses from your gunicorn application server and the open files cache of UNIX based systems for the static files. I think it's pretty obvious how to set things up. If you want me to explain any special directive in great detail simply leave a comment and I'll edit my answer.
I'm using Nginx as a web server, with a reverse proxy to a gunicorn Django server.
I tried using the SSLRedirect snippet from here:
http://djangosnippets.org/snippets/85/
Because this snippet would always return false from is_secure() with my setup, resulting in a redirect loop, I had to make some changes.
SSL works, but when I access http://domain.net/main it doesn't redirect to https://domain.net/main. Isn't it supposed to do that?
Below is the modification I made:
if 'HTTP_X_FORWARDED_PROTOCOL' in request.META:
    return True
And in my nginx conf (I only need SSL, http not required):
server {
    listen 8888;
    server_name domain.net;

    ssl on;
    ssl_certificate /path/to/domain.pem;
    ssl_certificate_key /path/to/domain.key;

    # serve directly - analogous for static/staticfiles
    location /media/ {
        root /path/to/root;
    }

    location /static/ {
        root /path/to/root;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://127.0.0.1:8881/;
        # note this line
        proxy_set_header X-Forwarded-Protocol https;
    }
}
Just do it entirely with nginx. No need to involve Django at all:
server {
    listen 80;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443;
    # The rest of your original server config here
}