I am following this doc http://tutos.readthedocs.io/en/latest/source/ndg.html and trying to run the server. I have the following file in sites-enabled:
upstream test_server {
    server unix:/var/www/test/run/gunicorn.sock fail_timeout=10s;
}

# This is not necessary - it's just commonly used:
# it just redirects example.com -> www.example.com
# so it isn't treated as two separate websites
server {
    listen 80;
    server_name dehatiengineers.in;
    return 301 $scheme://www.dehatiengineers.in$request_uri;
}

server {
    listen 80;
    server_name www.dehatiengineers.in;
    client_max_body_size 4G;

    access_log /var/www/test/logs/nginx-access.log;
    error_log /var/www/test/logs/nginx-error.log warn;

    location /static/ {
        autoindex on;
        alias /var/www/test/ourcase/static/;
    }

    location /media/ {
        autoindex on;
        alias /var/www/test/ourcase/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://test_server;
            break;
        }
    }

    # For favicon
    location /favicon.ico {
        alias /var/www/test/test/static/img/favicon.ico;
    }

    # For robots.txt
    location /robots.txt {
        alias /var/www/test/test/static/robots.txt;
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /var/www/test/ourcase/static/;
    }
}
My server's domain name is dehatiengineers.in.
The gunicorn_start.sh file is:
#!/bin/bash
NAME="ourcase" #Name of the application (*)
DJANGODIR=/var/www/test/ourcase # Django project directory (*)
SOCKFILE=/var/www/test/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=root # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=ourcase.settings # which settings file should Django use (*)
DJANGO_WSGI_MODULE=ourcase.wsgi # WSGI module name (*)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /var/www/test/venv/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /var/www/test/venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user $USER \
--bind=unix:$SOCKFILE
Now I am running both nginx and the gunicorn_start.sh file. nginx is running, as you can see at http://www.dehatiengineers.in/, but Django is not connected to nginx. On sudo sh gunicorn_start.sh, I get the following output:
Starting ourcase as root
gunicorn_start.sh: 16: gunicorn_start.sh: source: not found
[2016-05-20 06:21:45 +0000] [10730] [INFO] Starting gunicorn 19.5.0
[2016-05-20 06:21:45 +0000] [10730] [INFO] Listening at: unix:/var/www/test/run/gunicorn.sock (10730)
[2016-05-20 06:21:45 +0000] [10730] [INFO] Using worker: sync
[2016-05-20 06:21:45 +0000] [10735] [INFO] Booting worker with pid: 10735
But in the browser I am just getting the nginx output, not my Django site.
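As a side note, the "source: not found" line in the output above appears because the script is started with sh (which is dash on Ubuntu and has no source builtin); the script already has a #!/bin/bash shebang, so running it under bash avoids that warning, for example:
sudo bash gunicorn_start.sh
# or make it executable once and run it directly:
sudo chmod +x gunicorn_start.sh
sudo ./gunicorn_start.sh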
Apart from this, I am having an issue running the gunicorn_ourcase.service file:
[Unit]
Description=Ourcase gunicorn daemon
[Service]
Type=simple
User=root
ExecStart=/var/www/test/gunicorn_start.sh
[Install]
WantedBy=multi-user.target
The issues when running it:
$ systemctl enable gunicorn_ourcase
systemctl: command not found
Please help me to solve this.
Everything I was doing was fine, except that nginx was still using its default config file. I had to delete/edit the file named default located in /etc/nginx/sites-enabled/. After editing it, everything works fine.
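For reference, a typical way to do that (assuming the distribution's stock default site and a sudo-capable user) is:
sudo rm /etc/nginx/sites-enabled/default   # or edit it so it no longer catches the requests
sudo nginx -t                              # check the remaining config for syntax errors
sudo service nginx reload                  # pick up the change (systemctl reload nginx on systemd hosts)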
I am new to server configuration. I did some googling and configured a Django app using gunicorn and nginx on an Ubuntu 14.04 (trusty) server. For the first Django app I use port 80, and my config files are:
/etc/init/gunicorn.conf :-
description "Gunicorn application server handling myproject"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid
setgid www-data
chdir /home/myserver/my_virtualenv_path/myproject
exec /home/myserver/my_virtualenv_path/myproject/gunicorn --workers 2 --bind unix:/home/myserver/my_virtualenv_path/myproject/myproject.sock myproject.wsgi:application
My nginx configuration file for the first Django app:
/etc/nginx/sites-available :-
server {
    listen 80;
    server_name myapp.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/myserver/my_virtualenv_path/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/myserver/my_virtualenv_path/myproject/myproject.sock;
    }
}
After that, I link the site into sites-enabled.
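For example (the file name myapp under sites-available is just a guess for illustration):
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
sudo nginx -t && sudo service nginx reload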
Next, I create a new Django app inside the first Django app's virtualenv, like:
FirstApp_Virtual_Env\first_djangoapp\app files
FirstApp_Virtual_Env\Second_djangoapp\app files
Now I configure gunicorn for the second app like this:
/etc/init/gunicorn_t :-
description "Gunicorn application server handling myproject2"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid
setgid www-data
chdir /home/myserver/my_virtualenv_path/myproject2
exec /home/myserver/my_virtualenv_path/myproject/gunicorn --workers 2 --bind unix:/home/myserver/my_virtualenv_path/myproject2/myproject2.sock myproject2.wsgi:application
My nginx configuration file for the second Django app:
/etc/nginx/sites-available :-
server {
    listen 8000;
    server_name myapp2.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/myserver/my_virtualenv_path/myproject2;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/myserver/my_virtualenv_path/myproject2/myproject2.sock;
    }
}
After that I link the site into sites-enabled.
Now here is my problem: when I type myapp.com, my first Django app works fine, but for the second app, when I type myapp2.com it shows the nginx page, and when I type myapp2.com:8000 it works fine. I googled this but was unable to find a solution. I am a newbie at this, so please give me a hint on how to correct the problem. Thanks for your time.
You configured nginx to serve myapp2.com on port 8000:
server {
    listen 8000;
    server_name myapp2.com;
    # ...
}
so why would you expect nginx to serve it on port 80?
[edit] I thought the above was enough to make the problem clear but obviously not, so let's start again:
You configured nginx to serve myapp2.com on port 8000 (the listen 8000; line in your conf), so nginx does what you asked for: it serves myapp2.com on port 8000.
If you want nginx to serve myapp2.com on port 80 (which is the implied default port for http, so you don't have to specify it explicitly in your url - in other words, 'http://myapp2.com/' is a shortcut for 'http://myapp2.com:80/'), all you have to do is configure nginx to serve it on port 80, just like you did for 'myapp.com': replace listen 8000; with listen 80;.
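Applied to your second server block, only the listen line changes:
server {
    listen 80;
    server_name myapp2.com;
    # ... rest unchanged ...
}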
If you don't type in a port, your client will automatically use port 80.
Typing myapp2.com is the same as typing myapp2.com:80
But myapp2.com is not running on port 80, it's running on port 8000.
When you go into production it is possible to redirect myapp2.com to port 8000 without explicitly typing it. You register myapp2.com with a DNS name server and point it towards myapp2.com:8000
I was following this tutorial and successfully ran the project manually. However, after setting up nginx and systemd, it says 502 Bad Gateway.
I have looked through other similar threads to no avail.
I see that my gunicorn workers are running by doing ps -ax | grep gunicorn.
my nginx configuration:
server {
    listen 8000;
    server_name 127.0.0.1;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }
}
and the systemd file:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/home/ubuntu/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/myproject/myproject.sock myproject.wsgi:application
[Install]
WantedBy=multi-user.target
The contents of /var/log/nginx/error.log:
2017/02/18 17:57:51 [crit] 1977#1977: *6 connect() to unix:/home/ubuntu/myproject/myproject.sock failed (2: No such file or directory) while connecting to upstream, client: 10.0.2.2, server: 172.30.1.5, request: "GET / HTTP/1.1", upstream: "http://unix:/home/ubuntu/myproject/myproject.sock:/", host: "localhost:8000"
Manually running /home/ubuntu/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/myproject/myproject.sock myproject.wsgi:application also works. That is, the sock file is created.
I feel like I'm missing something very simple.
What could I be doing wrong?
There could be an ordering issue here - nginx is probably starting before gunicorn, so the socket is not yet there to connect to. You should add gunicorn.service to the After= directive of the nginx systemd unit.
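One way to express that ordering, assuming the gunicorn unit file is installed as gunicorn.service, is a drop-in override for nginx (the drop-in file name is illustrative):
# /etc/systemd/system/nginx.service.d/after-gunicorn.conf
[Unit]
After=gunicorn.service
Wants=gunicorn.service
Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart nginx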
My nginx and uwsgi configuration works perfectly for the first one or two requests. Then nginx shows 502 Bad Gateway. When I restart the uwsgi service, the same thing happens again. I am using Ubuntu 16.04. Here are all my conf files and the error log:
nginx conf:
upstream book {
    server unix:///tmp/book.sock;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;
    ......

    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    charset utf-8;
    client_max_body_size 100M;

    access_log /var/log/nginx/example.com_access.log;
    error_log /var/log/nginx/example.com_error.log;

    location /media {
        alias /home/prism/prod/example.com/media;
    }

    location /static {
        alias /home/prism/prod/example.com/static;
    }

    location / {
        uwsgi_pass book;
        include /etc/nginx/uwsgi_params;
    }
}
/var/log/nginx/example.com_error.log
2016/05/25 17:44:26 [error] 5230#5230: *214 connect() to unix:///tmp/book.sock failed (111: Connection refused) while connecting to upstream, client: 27.*.*.*, server: example.com, request: "GET / HTTP/2.0", upstream: "uwsgi://unix:///tmp/book.sock:", host: "example.com"
hiren.ini
[uwsgi]
chdir=/home/prism/prod/example.com
home = /home/prism/prod/example.com/.env
module=hiren.wsgi
master=True
process = 5
pidfile=/tmp/book.pid
socket= /tmp/book.sock
vacuum=True
max-requests=5000
daemonize=/home/prism/prod/example.com/hiren.log
uid = www-data
gid = www-data
die-on-term = true
and service file:
[Unit]
Description=uWSGI instance to serve example.com
[Service]
User=prism
ExecStart=/bin/bash -c 'cd /home/prism/prod/example.com; source .env/bin/activate; uwsgi --ini hiren.ini'
[Install]
WantedBy=multi-user.target
Solved the problem by tweaking the systemd script:
[Unit]
Description=uWSGI instance to serve example.com
[Service]
ExecStart=/bin/bash -c 'su prism; cd /home/prism/prod/example.com; source .env/bin/activate; uwsgi --ini hiren.ini'
[Install]
WantedBy=multi-user.target
If you are able to run uwsgi with your .ini file inside your virtual environment, and nginx is starting properly, then the issue is with the systemd file.
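Either way, after editing the unit it usually helps to reload systemd and check the journal for the actual error (the unit name uwsgi-hiren is just a placeholder here):
sudo systemctl daemon-reload
sudo systemctl restart uwsgi-hiren
sudo journalctl -u uwsgi-hiren -n 50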
I followed this post to serve my Django project. The project runs well with manage.py runserver and I want to set it up for production. Here are my config files:
nginx.conf:
upstream django {
    server /tmp/vc.sock;
    #server 10.9.1.137:8002;
}

server {
    listen 8001;
    server_name 10.9.1.137;

    charset utf-8;
    client_max_body_size 25M;

    location /media {
        alias /home/deploy/vc/media;
    }

    location /static {
        alias /home/deploy/vc/static;
    }

    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;
    }
}
uwsgi.ini:
[uwsgi]
chdir = /home/deploy/vc
wsgi-file = vc/wsgi.py
master = true
processes = 2
#socket = :8002
socket = /tmp/vc.sock
chmod-socket = 666
vacuum = true
If I use a TCP socket (server 10.9.1.137:8002 and socket = :8002), it works fine. However, if I comment those out and use a Unix socket (server /tmp/vc.sock and socket = /tmp/vc.sock), the server returns a 502 error. How should I fix it?
EDIT
Here's the nginx error log when I run /etc/init.d/nginx restart
nginx: [emerg] invalid host in upstream "/tmp/vc.sock" in /etc/nginx/conf.d/vc.conf:2
nginx: configuration file /etc/nginx/nginx.conf test failed
And this is the warning when I run uwsgi --ini vc/uwsgi.ini:
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Can't I run uWSGI as root?
roberto's comment should be an answer!
the syntax in nginx for unix socket path is wrong, you need to prefix it with unix:
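Applied to the upstream block above, that would be:
upstream django {
    server unix:/tmp/vc.sock;
}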
Check your nginx error log; it is very probably telling you that nginx does not have permission on the socket. Unix sockets honour file system permissions, so nginx must have write privileges on the socket file. Short answer: 664 is not enough, you need 666.
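In uwsgi.ini this is controlled by the socket permission options, for example:
# world-writable socket, as suggested above
chmod-socket = 666
# or, more restrictively, hand the socket to nginx's group (user/group names here are illustrative)
# chown-socket = deploy:www-data
# chmod-socket = 660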
I'm trying to set up Django with nginx and gunicorn by following these tutorials: http://senko.net/en/django-nginx-gunicorn/ and http://honza.ca/2011/05/deploying-django-with-nginx-and-gunicorn
Setup:
virtualenv --no-site-packages qlimp
cd qlimp
source bin/activate
pip install gunicorn django
django-admin.py startproject qlimp
cd qlimp
python manage.py runserver 0.0.0.0:8001
The gunicorn setup is the same as here: http://senko.net/en/django-nginx-gunicorn/
changes made (run.sh):
LOGFILE=/var/log/gunicorn/qlimp.log
USER=nirmal
GROUP=nirmal
cd /home/nirmal/qlimp/qlimp
The upstart config is also the same as the tutorial at the above link.
changes made:
exec /home/nirmal/qlimp/qlimp/run.sh
Nginx setup:
server {
    listen 80;
    server_name qlimp.com;

    access_log /home/nirmal/qlimp/log/access.log;
    error_log /home/nirmal/qlimp/log/error.log;

    location /static {
        root /home/nirmal/qlimp/qlimp;
    }

    location / {
        proxy_pass http://127.0.0.1:8888;
    }
}
Then I restarted nginx and ran the gunicorn server:
sudo /etc/init.d/nginx restart
(qlimp) cd qlimp/qlimp
(qlimp) gunicorn_django -D -c run.sh
When I run the gunicorn server, I'm getting this error:
Failed to read config file: run.sh
Traceback (most recent call last):
File "/home/nirmal/qlimp/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 65, in load_config
execfile(opts.config, cfg, cfg)
File "run.sh", line 3
LOGFILE=/var/log/gunicorn/qlimp.log
^
SyntaxError: invalid syntax
Could anyone guide me? Thanks!
Try /etc/gunicorn.d instead of your own sh file
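For context, gunicorn's -c option expects a Python-syntax config file, which is why the shell-style run.sh raises the SyntaxError above. A rough sketch of an equivalent Python config (the file name and values are illustrative, taken from the question where possible):
# gunicorn_conf.py -- passed as: gunicorn_django -D -c gunicorn_conf.py
bind = "127.0.0.1:8888"
workers = 2
user = "nirmal"
group = "nirmal"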
I configured a Django app with Gunicorn and Nginx on EC2.
nano /etc/init/site1.conf
description "site1 web server"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /home/scripts/gunicorn_runserver.sh
and in gunicorn_runserver.sh
#!/bin/bash
set -e
LOGFILE=/var/log/nginx/site1.log
NUM_WORKERS=10
# user/group to run as
USER=www-data
GROUP=adm
cd /home/projects/project_name/
# source ../../bin/activate
exec gunicorn_django -w $NUM_WORKERS \
--user=$USER --group=$GROUP --log-level=error \
--log-file=$LOGFILE 2>>$LOGFILE
Nginx conf
upstream app_server_site1 {
    server localhost:8000 fail_timeout=0;
}

location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    if (!-f $request_filename) {
        proxy_pass http://app_server_site1;
        break;
    }
}
Finally
/etc/init.d/nginx restart
service site1 start
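A quick sanity check after that (assuming the Upstart job name and paths above):
sudo nginx -t                 # nginx config syntax check
sudo status site1             # Upstart job status
curl -I http://localhost/     # should come back from the Django app, not the nginx default page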