I'm new to anything related to servers and am trying to deploy a Django application. Today I bought a domain name for the app and am having trouble configuring it so that the base URL does not need the port number at the end. I have to type www.trackthecharts.com:8001 to see the website when I only want to use www.trackthecharts.com. I think the problem is somewhere in my nginx, gunicorn or supervisor configuration.
gunicorn_config.py
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/top-chart-app/'
bind = '162.243.76.202:8001'
workers = 3
nginx config
server {
server_name 162.243.76.202;
access_log off;
location /static/ {
alias /opt/myenv/static/;
}
location / {
proxy_pass http://127.0.0.1:8001;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Real-IP $remote_addr;
add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
}
}
supervisor config
[program:top_chart_gunicorn]
command=/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py djangoTopChartApp.wsgi
autostart=true
autorestart=true
stderr_logfile=/var/log/supervisor_gunicorn.err.log
stdout_logfile=/var/log/supervisor_gunicorn.out.log
Thanks for taking a look.
You should bind to port 80, the default HTTP port. Then make sure that in /etc/nginx/sites-enabled/ you are listening on port 80.
By binding to port 80, you will not need to explicitly specify one in your URL.
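For reference, a hedged sketch of gunicorn_config.py under this advice; the paths and worker count are the asker's own, everything else is illustrative:
# gunicorn_config.py (sketch)
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/top-chart-app/'

# Option A: bind Gunicorn itself to port 80 (needs root or CAP_NET_BIND_SERVICE).
# bind = '0.0.0.0:80'

# Option B (more common): keep Gunicorn on a local high port and let nginx listen on
# port 80 for www.trackthecharts.com; this matches the proxy_pass http://127.0.0.1:8001
# already present in the nginx config above.
bind = '127.0.0.1:8001'

workers = 3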
Related
I need to get the client IP address of a websocket connection for some extra functionality I would like to implement. I have an existing deployed Django server running an Nginx-Gunicorn-Uvicorn worker-Redis configuration. During development, while running a local server, everything works as expected. However, when deployed, I receive the error "NoneType object is not subscriptable" when attempting to access the client IP address of the websocket via self.scope["client"][0].
Here are the configurations and code:
NGINX Config:
upstream uvicorn {
server unix:/run/gunicorn.sock;
}
server {
listen 80;
server_name <ip address> <hostname>;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
include proxy_params;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://uvicorn;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 128;
}
location /ws/ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
proxy_pass http://uvicorn;
}
location /static/ {
root /var/www/serverfiles/;
autoindex off;
}
location /media {
alias /mnt/apps;
}
}
Gunicorn Config:
NOTE: ExecStart has been formatted for readability; it is one line in the actual config
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=django
Group=www-data
WorkingDirectory=/srv/server
Environment=DJANGO_SECRET_KEY=
Environment=GITEA_SECRET_KEY=
Environment=MSSQL_DATABASE_PASSWORD=
ExecStart=/bin/bash -c "
source venv/bin/activate;
exec /srv/server/venv/bin/gunicorn
--workers 3
--bind unix:/run/gunicorn.sock
--timeout 300
--error-logfile /var/log/gunicorn/error.log
--access-logfile /var/log/gunicorn/access.log
--log-level debug
--capture-output
-k uvicorn.workers.UvicornWorker
src.server.asgi:application
"
[Install]
WantedBy=multi-user.target
Code throwing the error:
@database_sync_to_async
def _set_online_if_model(self, set_online: bool) -> None:
    model: MyModel
    for model in MyModel.objects.all():
        # self.scope["client"] comes back as None in the deployed setup, hence the error
        if self.scope["client"][0] == model.ip:
            model.online = set_online
            model.save()
This server had been running phenomenally in its current configuration before my need to access connected clients' IP addresses. It handles other websocket connections just fine without any issues.
I've already looked into trying to configure my own custom UvicornWorker according to the docs. I'm not at all an expert in this, so I might have misunderstood what I was supposed to do: https://www.uvicorn.org/deployment/#running-behind-nginx
from uvicorn.workers import UvicornWorker

class ServerUvicornWorker(UvicornWorker):
    def __init__(self, *args, **kwargs) -> None:
        # Ask Uvicorn to honour proxy headers (X-Forwarded-For etc.) from any upstream address
        self.CONFIG_KWARGS.update({"proxy_headers": True, "forwarded_allow_ips": "*"})
        super().__init__(*args, **kwargs)
I also looked at https://github.com/django/channels/issues/546, which mentions a --proxy-headers option for Daphne; however, I am not running Daphne. https://github.com/django/channels/issues/385 mentioned that HTTP headers are passed to the connect method of a consumer, but that post is quite old and no longer relevant as far as I can tell. I do not get any additional **kwargs in my connect method.
Client IP has nothing to do with channels
self.scope["client"][0] is undefined because when you receive data from the front end at the backend there is no data with the name client. so try to send it from the frontend. you can send a manual, static value at first to verify and then find techniques to read the IP address and then send it.
I have what I believe to be a slightly convoluted Django/Gunicorn/NGINX stack that is giving me trouble as I try to migrate it from the django development server to a production setup with Gunicorn and NGINX. Specifically, I'm having trouble serving static files.
SYSTEM ARCHITECTURE: I have one physical server with a public IP address. This physical server hosts 3 VMs on a private virtual network (NAT). Each VM runs its own Django project on port 8001. I forward port 8001 on each VM to a unique available port on the physical machine. So, in summary, the architecture looks like the following:
PRIVATE VIRTUAL NETWORK VMs:
VM1 runs "site 1" on VM1 port 8001
VM2 runs "site 2" on VM2 port 8001
VM3 runs "site 3" on VM3 port 8001
HOST SERVER:
Publishes "site 1" on host port 8001 (VM port 8001 fwd.to Host port 8001)
Publishes "site 2" on host port 8002 (VM port 8001 fwd.to Host port 8002)
Publishes "site 3" on host port 8003 (VM port 8001 fwd.to Host port 8003)
This works really well for development with the django development server. I just need to include a port # in the URL. As in:
myserver.edu:8001 for site 1
myserver.edu:8002 for site 2
myserver.edu:8003 for site 3
For production I would like to setup NGINX to serve the sites as:
myserver.edu/site1 for site 1
myserver.edu/site2 for site 2
myserver.edu/site3 for site 3
I installed NGINX on the host machine and configured it to use TLS. The NGINX configuration on the host machine defines the 3 locations below with the following config. You can see that I use proxy_pass to route traffic for each site to the virtual network on the host machine.
location /site1/ {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://192.168.122.243:8001/;
}
location /site2/ {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://192.168.122.244:8001/;
}
location /site3/ {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://192.168.122.245:8001/;
}
So any request for:
myserver.edu/site1 goes to port 8001 on VM1
myserver.edu/site2 goes to port 8001 on VM2
myserver.edu/site3 goes to port 8001 on VM3
On VM1 I have a django site + gunicorn + NGINX with the following config (same setup on all VMs):
server {
listen 8001;
server_name 0.0.0.0;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/rcrv/bubbles/s2uds_user/gui;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/rcrv/bubbles/s2uds_user/bubs.sock;
}
}
My Site 1 root URL from urls.py is this: url(r'^$', gv.home),
BEHAVIOR UNDER THE ABOVE NGINX PRODUCTION CONFIG:
If I browse site 1 on VM1 from within the NAT with a URL such as:
192.168.122.243:8001/
This will work perfectly - static and media files are served
If I browse from public IP space with a URL such as:
myserver.edu/site1/
This will render all dynamic content from Django but will fail to serve static content (with a 404). My browser developer console shows that the browser is looking for the static content at https://myserver.edu/static
Note that I expected it to look for static content at myserver.edu/site1/static
If I modify that URL directly in the browser to myserver.edu/site1/static, I can access the static content that was missing
ATTEMPTS TO MITIGATE/FIX INCLUDE:
I changed the location block on the NGINX config of the VM to be:
location = /site1/static/
No luck.
Debugging shows that the browser is still trying to find the static content at: myserver.edu/static
QUESTION:
How the heck do I modify or fix the configuration such that Django includes that "/site1/" part in the static URL? I'm inclined to think this is not an NGINX config issue but a Django problem: I believe I need to tell Django to prepend /site1/ to its static URL.
Ideas? I've read numerous responses to similar Django static file issues, but none of them fixes this.
Thank you.
UPDATE: I've started to figure this out. Here is what worked for me.
I created a /static/ directory at /var/www/static on each VM and set STATIC_ROOT in the settings.py file to this location, then ran collectstatic to copy all static content to this directory.
I then modified static file location block within the NGINX conf on the VM to be:
location /static/ {
root /var/www/;
}
Finally I modified STATIC_URL in settings.py from STATIC_URL = '/static/' to STATIC_URL = '/site1/static/'.
The result is that the static content is now served in production under NGINX. The cause was my incomplete understanding of static content handling in general. Typing out the question here sort of clarified the problem and led me to the solution. Hope this helps someone with a similar architecture. Now to fix the media files - likely the same approach.
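In settings.py terms, the change described above amounts to roughly the following sketch (the paths and the /site1/ prefix are taken from the description; adjust per site):
# settings.py (sketch of the fix described above)
STATIC_ROOT = '/var/www/static/'   # collectstatic copies every app's static files here on each VM
STATIC_URL = '/site1/static/'      # the browser now requests /site1/static/..., which the host
                                   # NGINX strips back to /static/... before proxying to the VM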
I'm using nginx as a reverse proxy with gunicorn for my Django app, and am new to web server configuration. My app has a Postgres backend, and the machine hosting it runs Ubuntu 14.04 LTS.
I have reason to suspect that my nginx configuration is not forwarding the proxy headers to the Django app correctly. Is there a way I can see (e.g. print) the forwarded host, http_user_agent, remote_addr etc. on the Linux command line to test my gut feeling?
Secondly, how do I check whether my Django app received the correct forwarded IP? Can I somehow print it?
/etc/nginx/sites-available/myproject:
server {
listen 80;
server_name example.cloudapp.net;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/mhb11/folder/myproject;
}
location / {
proxy_set_header Host $host;
proxy_set_header User-Agent $http_user_agent;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://unix:/home/mhb11/folder/myproject/myproject.sock;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/mhb11/folder/myproject/templates/;
}
}
All you have to do is print out request.META at the Django project level to see what all of those values are being set to. This happens automatically if you hit an error while DEBUG is set to True (just scroll down and you'll see a big table with all the request.META values populated).
Or you can print it yourself from your views.py, or, if that doesn't work, from any middleware you have. You can even write custom middleware for this. Let me know if you need further clarification; I can give you basic code snippets too.
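As a minimal sketch of that middleware idea (the module and class names are made up), something like this logs the forwarded headers on every request:
# myproject/middleware.py (hypothetical module)
import logging

logger = logging.getLogger(__name__)

class DumpProxyHeadersMiddleware:
    """Log the proxy-related request.META values so you can verify what nginx forwards."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        for key in ('HTTP_HOST', 'HTTP_USER_AGENT', 'HTTP_X_REAL_IP',
                    'HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR'):
            logger.info('%s = %r', key, request.META.get(key))
        return self.get_response(request)
Add the class to MIDDLEWARE in settings.py and tail your log (or the console under runserver) while hitting the site through nginx.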
This question was posted a long time ago, but since I landed here when I needed help, below is some code I ended up using to attain the same goal, in case it helps other people.
def getClientIP(request):
    x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
    if x_forwarded_for:
        ip = x_forwarded_for.split(',')[-1].strip()
    else:
        ip = request.META.get('REMOTE_ADDR')
    return ip
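A quick usage sketch (the whoami view name is invented):
from django.http import JsonResponse

def whoami(request):
    # Echo back the address the proxy chain reports for this caller
    return JsonResponse({'ip': getClientIP(request)})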
I have deployed my application on Linode, but for testing SD I used a PC as a server on my local network, where I have my application running. I'd like to share my Nginx and Supervisor config so you can give me some advice on whether it is right or I've made a mistake.
I modified my hosts file so I can access it at www.mydomain.com.
In my settings.py file I have:
# SWAMPDRAGON
SWAMP_DRAGON_CONNECTION = ('swampdragon_auth.socketconnection.HttpDataConnection', '/data')
DRAGON_URL = 'http://127.0.0.1:9999'
I took what I did for Celery as an example and reused it to do something similar with SD:
I created a command file: 'swampdragon_start'
#!/bin/bash
NAME="swampdragon"
DJANGODIR=/opt/virtualenvs/myproject/pysrc/myproject/myproject
# Enable virtualenv
cd $DJANGODIR
source ../../../bin/activate
export C_FORCE_ROOT="true"
# Execute SD
exec python manage.py runsd 0.0.0.0:9999
Then I created a file for supervisor: 'swampdragon.conf'
[program:swampdragon]
command = /opt/virtualenvs/myproject/bin/swampdragon_start
user = root
stdout = /path/to/file.log
stderr = /path/to/file.log
autostart = true
autorestart = true
Finally, I made swampdragon_start executable and added swampdragon to supervisor.
My Nginx config is as follow:
upstream swampdragon {
server 127.0.0.1:9999;
}
proxy_next_upstream off;
server {
listen 80;
server_name www.mydomain.com;
access_log /path/to/log;
error_log /path/to/log;
location /media/ {
alias /path/to/media;
}
location /static/ {
alias /path/to/statics;
}
location / {
... # django config
}
}
server {
# this is taken from http://swampdragon.net/blog/deploying-swampdragon/
listen 80;
server_name sd.mydomain.com;
# Websocket
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header Host $http_host;
proxy_pass http://swampdragon;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
That's how I've made it work, but I think the Nginx upstream is not really working.
I am sure there's a better way to deploy SD; any suggestions?
Thanks for your time.
Could someone please post an nginx configuration file that shows how to properly route the following URLs to gunicorn:
http://www.example.com
https://www.example.com
http://testing.example.com
https://testing.example.com
Some questions:
why do some nginx configuration files contain an "upstream" directive?
I am running 2N+1 gunicorn workers. Would I also need multiple nginx workers? By that I mean, should I even set the "worker_processes" directive, since nginx is just supposed to serve static files?
how do I set up buffering/caching?
server {
listen 80 default_server deferred;
listen 443 default_server deferred ssl;
listen [::]:80 ipv6only=on default_server deferred;
listen [::]:443 ipv6only=on default_server deferred ssl;
server_name example.com www.example.com testing.example.com;
root /path/to/static/files;
# Include SSL stuff
location / {
location ~* \.(css|gif|ico|jpe?g|js[on]?p?|png|svg|txt|xml)$ {
access_log off;
add_header Cache-Control "public";
add_header Pragma "public";
expires 365d;
log_not_found off;
tcp_nodelay off;
open_file_cache max=16 inactive=600s; # 10 minutes
open_file_cache_errors on;
open_file_cache_min_uses 2;
open_file_cache_valid 300s; # 5 minutes
}
try_files $uri @gunicorn;
}
location @gunicorn {
add_header X-Proxy-Cache $upstream_cache_status;
expires epoch;
proxy_cache proxy;
proxy_cache_bypass $nocache;
proxy_cache_key "$request_method#$scheme://$server_name:$server_port$uri$args";
proxy_cache_lock on;
proxy_cache_lock_timeout 2000;
proxy_cache_use_stale error timeout invalid_header updating http_500;
proxy_cache_valid 200 302 1m;
proxy_cache_valid 301 1D;
proxy_cache_valid any 5s;
proxy_http_version 1.1;
proxy_ignore_headers Cache-Control Expires;
proxy_max_temp_file_size 1m;
proxy_no_cache $nocache;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://gunicorn;
}
}
And answering your other questions:
The upstream directive can be used to simplify any *_pass directives in your nginx configuration and for load balancing situations. If you have more than one gunicorn server you can do something like the following:
upstream gunicorn {
server gunicorn1;
server gunicorn2;
}
server {
location / {
proxy_pass http://gunicorn;
}
}
Set worker_processes of nginx to auto if your nginx version already has the auto option. The number of worker processes of your nginx has nothing to do with the worker processes of your gunicorn application. And yes, even if you are only serving static files, setting the correct number of worker processes will increase the total number of requests your nginx can handle, and it's therefore recommended to set it up right. If your nginx version doesn't have the auto option, simply set it to your real physical CPU count or real physical CPU core count.
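Unrelated to nginx's worker_processes, the 2N+1 Gunicorn worker count mentioned in the question is usually derived from the CPU count; here is a sketch of that in a Gunicorn config file (the socket path is illustrative):
# gunicorn_config.py (sketch): 2N+1 workers, where N is the CPU count
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1
bind = 'unix:/run/gunicorn.sock'  # illustrative; point this wherever nginx proxies to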
I included a sample configuration for caching the responses from your gunicorn application server and the open file cache of UNIX-based systems for the static files. I think it's pretty obvious how to set things up. If you want me to explain any directive in more detail, simply leave a comment and I'll edit my answer.