I'm trying to host an app using Nginx on Linode.com, but I'm stuck early on, at the uWSGI configuration.
I've used the "Getting Started" guide and the "WSGI using uWSGI and nginx on Ubuntu 12.04 (Precise Pangolin)" guide, and I've successfully deployed Nginx (I got the Nginx welcome message in the browser).
Although the above tutorial is for Ubuntu 12.04, I'm using 14.04.
The problem starts when I get to the uWSGI configuration and the 'Hello World' Python app. Going to location / in the browser returns Failed to load resource: the server responded with a status of 500 (Internal Server Error) and nothing gets logged in the server's error.log. location /static works, though, and serves files without a hitch.
I've tried many things and looked extensively for a fix on Google and Stack Overflow, but found nothing, and I'm kind of frustrated right now.
Thank you for any help.
Here are my config files (I've hidden my domain and ip):
/etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu
XX.XX.XX.XXX mars
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
/etc/nginx/sites-enabled/example.com
server {
listen 80;
server_name $hostname;
access_log /srv/www/example.com/logs/access.log;
error_log /srv/www/example.com/logs/error.log;
location / {
#uwsgi_pass 127.0.0.1:9001;
uwsgi_pass unix:///run/uwsgi/app/example.com/example.com.socket;
include uwsgi_params;
uwsgi_param UWSGI_SCHEME $scheme;
uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
}
location /static {
root /srv/www/example.com/public_html/;
index index.html index.htm;
}
}
/etc/uwsgi/apps-enabled/example.com.xml
<uwsgi>
<plugin>python</plugin>
<socket>/run/uwsgi/app/example.com/example.com.socket</socket>
<pythonpath>/srv/www/example.com/application/</pythonpath>
<app mountpoint="/">
<script>wsgi_configuration_module</script>
</app>
<master/>
<processes>4</processes>
<harakiri>60</harakiri>
<reload-mercy>8</reload-mercy>
<cpu-affinity>1</cpu-affinity>
<stats>/tmp/stats.socket</stats>
<max-requests>2000</max-requests>
<limit-as>512</limit-as>
<reload-on-as>256</reload-on-as>
<reload-on-rss>192</reload-on-rss>
<no-orphans/>
<vacuum/>
</uwsgi>
/srv/www/example.com/application/wsgi_configuration_module.py
import os
import sys

sys.path.append('/srv/www/example.com/application')
os.environ['PYTHON_EGG_CACHE'] = '/srv/www/example.com/.python-egg'

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return 'Hello world!'
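For completeness, the callable itself can be sanity-checked without nginx or uWSGI at all, using Python's built-in WSGI server (just a throwaway check, run with the same interpreter the uwsgi python plugin uses):
# quick_test.py -- serves wsgi_configuration_module.application on port 8080
from wsgiref.simple_server import make_server
from wsgi_configuration_module import application

make_server('', 8080, application).serve_forever()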
Last access log entries:
XX.XX.XX.XXX - - [05/Jul/2015:10:03:37 -0400] "GET / HTTP/1.1" 500 32 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36"
XX.XX.XX.XXX - - [05/Jul/2015:10:03:38 -0400] "GET /favicon.ico HTTP/1.1" 500 32 "http://example.com/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36"
The only error log entries I ever got, and only once while trying to fix this:
2015/07/05 08:49:06 [crit] 25301#0: *17 connect() to unix:///run/uwsgi/app/example.com/example.com.socket failed (2: No such file or directory) while connecting to upstream, client: XX.XX.XX.XXX, server: mars, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:///run/uwsgi/app/example.com/example.com.socket:", host: "example.com"
2015/07/05 08:49:07 [crit] 25301#0: *17 connect() to unix:///run/uwsgi/app/example.com/example.com.socket failed (2: No such file or directory) while connecting to upstream, client: XX.XX.XX.XXX, server: mars, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///run/uwsgi/app/example.com/example.com.socket:", host: "example.com", referrer: "http://example.com/"
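The crit lines say the unix socket simply doesn't exist, so the first thing worth checking is whether the uWSGI app ever started and created it. A rough checklist (paths assume Ubuntu's packaged uwsgi with apps-enabled, as used above):
# does the socket exist, and who owns it?
ls -l /run/uwsgi/app/example.com/
# did the packaged uwsgi service start the app at all?
sudo service uwsgi status
sudo tail -n 50 /var/log/uwsgi/app/example.com.log
# bypass nginx entirely and serve the app over plain HTTP as a sanity check
uwsgi --plugin python --http :8080 --chdir /srv/www/example.com/application --wsgi-file wsgi_configuration_module.py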
I do not understand: what is /etc/nginx/sites-enabled/dev.host.in? How is it used, and why the *.in name?
I think you should try this.
Step 1.
Create a project.ini file:
# django_project.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /home/username/django_project
# Django's wsgi file
module = blog.wsgi
# the virtualenv (full path)
home = /home/username/Env/project
# process-related settings
master = true
pidfile = /tmp/proj_uwsgi.pid
# maximum number of worker processes
processes = 5
# the socket
socket = :8001
# ... with appropriate permissions - may be needed
# chmod-socket = 664
# clear environment on exit
vacuum = true
# background the process
daemonize = /home/username/django_project/error_uwsgi.log
Step 2.
Create mysite.conf for nginx: vim /etc/nginx/conf.d/mysite.conf
upstream django {
#server unix:///home/username/django_project/djproj.sock; # for a file socket
server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
# the port your site will be served on
listen 80;
# the domain name it will serve for
server_name localhost mysite.com www.mysite.com; # substitute your machine's IP address or FQDN
charset utf-8;
# max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media { alias /home/username/django_project/media; # your Django project's media files - amend as required
}
location /static { alias /home/username/django_project/static; # your Django project's static files - amend as required
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass 127.0.0.1:8001; # or use the upstream defined above: uwsgi_pass django;
include /home/username/django_project/uwsgi_params; # the uwsgi_params file you installed
}
}
Step 3.
ln -s /etc/nginx/uwsgi_params /home/username/django_project/
Step 4.
uwsgi --ini django_project.ini
uwsgi --stop /tmp/proj_uwsgi.pid
uwsgi --reload /tmp/proj_uwsgi.pid
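--stop and --reload read the PID from the pidfile the ini writes (/tmp/proj_uwsgi.pid above). If the site still doesn't come up, the daemonize log from step 1 is the first place to look, and a quick request confirms nginx can reach uWSGI (the host name is just the example from the config):
tail -n 20 /home/username/django_project/error_uwsgi.log
curl -I http://mysite.com/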
Related
I have a problem deploying my Django server with uWSGI and nginx.
The command dev_ralph runserver 0.0.0.0:8000 starts the development server and works fine.
But my goal now is to deploy a production Django server, and as I said, I have some problems with that.
My project root directory is: /home/ralphadmin/uwsgi/capentory-ralph/ralph
Here is the nginx virtual host configuration:
/etc/nginx/sites-available/ralph.conf and /etc/nginx/sites-enabled/ralph.conf:
# mysite_nginx.conf
# the upstream component nginx needs to connect to
upstream django {
# server unix:///home/ralphadmin/uwsgi/capentory-ralph/ralph/mysite.sock; # for a file socket
server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
# the port your site will be served on
listen 80;
# the domain name it will serve for
server_name 10.70.7.1; # substitute your machine's IP address or FQDN
charset utf-8;
# max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media {
alias /var/media; # your Django project's media files - amend as required
}
location /static {
root /home/ralphadmin/uwsgi/capentory-ralph/ralph/src/ralph/static/; # your Django project's static files
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass django;
include /home/ralphadmin/uwsgi/capentory-ralph/ralph/uwsgi_params; # the uwsgi_params file you installed
}
}
As you can see, my static files are located in /home/ralphadmin/uwsgi/capentory-ralph/ralph/src/ralph/static/
And if I try to connect to the server, the default nginx welcome page shows up in my browser.
I would really appreciate some helpful comments.
----------- Update 1
I investigated my nginx logs and found this:
2020/02/11 15:18:44 [error] 19097#19097: *1 connect() failed (111: Connection refused)
while connecting to upstream, client: 10.70.7.254, server: 10.70.7.1, request: "GET /
HTTP/1.1", upstream: "uwsgi://10.70.7.1:8630", host: "10.70.7.1"
2020/02/11 15:31:55 [error] 19492#19492: *1 connect() failed (111: Connection refused)
while connecting to upstream, client: 10.70.7.254, server: 10.70.7.1, request: "GET
/favicon.ico HTTP/1.1", upstream: "uwsgi://10.70.7.1:8630", host: "10.70.7.1",
referrer: "http://10.70.7.1/"
... uWSGI is running, but it still doesn't work at all :(
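One thing worth comparing with a 'Connection refused' like this: what uWSGI is actually listening on versus what nginx is actually passing to (the upstream in the log is 10.70.7.1:8630, not the 127.0.0.1:8001 from the config above). A rough way to check both sides:
# what is uWSGI really bound to?
sudo ss -tlnp | grep uwsgi
# which config is nginx really running with, and where does it pass uwsgi traffic?
sudo nginx -T | grep -nE 'uwsgi_pass|upstream|listen'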
If I use chunked transfer encoding with nginx, uWSGI and Flask, I always get Content-Length in the headers together with Transfer-Encoding: chunked. However, HTTP/1.1 prohibits this behaviour. I have tried to configure nginx and uWSGI to achieve the desired behaviour (no Content-Length in the headers when Transfer-Encoding: chunked is set), but without success. First, here is my server and client code:
Server code (server.py):
from flask import Flask
from flask import request

application = Flask(__name__)

@application.route('/', methods=['PUT'])
def hello():
    print(request.headers)
    print(request.environ.get('SERVER_PROTOCOL'))
    return "Hello World!"

if __name__ == "__main__":
    application.run(host='0.0.0.0')
Client code (client.py):
import requests

def get_data():
    yield b'This is test file.'
    yield b'This is test file.'

r = requests.request(
    method='PUT',
    url='http://127.0.0.1:5000/',
    data=get_data(),
    headers={
        'Content-type': 'text/plain',
        'X-Accel-Buffering': 'no',
    }
)
print('Response: ', r.text)
If I run the server and connect to it with the client, I get the following output. Server output:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
Host: 127.0.0.1:5000
User-Agent: python-requests/2.18.4
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
Content-Type: text/plain
X-Accel-Buffering: no
Transfer-Encoding: chunked
HTTP/1.1
[pid: 18455|app: 0|req: 1/1] 127.0.0.1 () {34 vars in 412 bytes} [Wed Jan 17 08:24:53 2018] PUT / => generated 12 bytes in 0 msecs (HTTP/1.1 200) 2 headers in 79 bytes (1 switches on core 0)
Client output:
Response: Hello World!
So far everything seems alright: in the headers we have Transfer-Encoding without Content-Length. Now I try to bring in uWSGI (wsgi.py):
from server import application

if __name__ == "__main__":
    application.run()
I run the following command:
$ uwsgi --http-socket localhost:5000 -w wsgi
The output is the same as in the previous attempt, so still as expected. Now I will try to deploy nginx. My uWSGI configuration (uwsgi.ini):
[uwsgi]
module = wsgi
master = true
processes = 5
socket = /tmp/flask.sock
chmod-socket = 777
vacuum = true
die-on-term = true
My nginx configuration (/etc/nginx/nginx.conf):
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 5000;
server_name 127.0.0.1;
location / {
include uwsgi_params;
uwsgi_pass unix:/tmp/flask.sock;
proxy_request_buffering off;
proxy_buffering off;
proxy_http_version 1.1;
chunked_transfer_encoding on;
}
proxy_request_buffering off;
proxy_buffering off;
proxy_http_version 1.1;
chunked_transfer_encoding on;
}
}
I start nginx and then I run:
$ uwsgi --ini uwsgi.ini --wsgi-manage-chunked-input --http-raw-body --http-auto-chunked --http-chunked-input
Now, the output contains Content-Length:
Content-Type: text/plain
Content-Length: 36
Host: 127.0.0.1:5000
User-Agent: python-requests/2.18.4
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
X-Accel-Buffering: no
Transfer-Encoding: chunked
HTTP/1.1
[pid: 20220|app: 0|req: 1/1] 127.0.0.1 () {42 vars in 525 bytes} [Wed Jan 17 08:31:39 2018] PUT / => generated 12 bytes in 1 msecs (HTTP/1.1 200) 2 headers in 79 bytes (1 switches on core 0)
I tried different settings for nginx: proxy_request_buffering, proxy_buffering, proxy_http_version and chunked_transfer_encoding in both the server and location contexts, without success. I added X-Accel-Buffering: no to the headers, but it did not solve the problem. I also tried different options for uwsgi: wsgi-manage-chunked-input, http-raw-body, http-auto-chunked and http-chunked-input, without achieving the desired behaviour (Content-Length is still present in the headers together with Transfer-Encoding: chunked).
I use the following versions of Flask, uWSGI and nginx:
Flask==0.12.2
uWSGI===2.1-dev-f74553db
nginx 1.12.2-2
Any idea what could be wrong? Thanks.
I suffered from the same problem. My environment was uWSGI 2.0.17, with nginx and Flask 1.0. When the client sent a request with Transfer-Encoding: chunked, nginx added the Content-Length (though this is prohibited by the HTTP/1.1 protocol).
My conclusion was that uWSGI does not support requests with chunked Transfer-Encoding. My Flask app didn't get the request body of a POST when the request headers had Transfer-Encoding only, and also when they had both Transfer-Encoding and Content-Length.
Waitress, a WSGI server for Python 2 and 3, solved this problem. If Waitress receives a request with a Transfer-Encoding header, it ignores it and sets the right Content-Length (refer to https://github.com/Pylons/waitress/blob/c18aa5e24e8d96bb64c9454a626147f37a23f0f0/waitress/parser.py#L154).
The official Flask documentation also recommends Waitress for running a production Flask server (http://flask.pocoo.org/docs/1.0/tutorial/deploy/).
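As a concrete illustration, serving the Flask app from the question with Waitress instead of uWSGI would look roughly like this (a sketch; it assumes server.py from the question is importable):
# serve_waitress.py -- run the Flask app under Waitress
from waitress import serve
from server import application  # the Flask app defined in server.py above

if __name__ == '__main__':
    serve(application, host='0.0.0.0', port=5000)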
I'm trying to migrate a Django app from Ubuntu 14.04 to a Raspberry Pi (Raspbian OS).
For Ubuntu I followed http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html and it worked.
On Raspbian it's not so simple.
This is my bills_nginx.conf in /etc/nginx/sites-enabled:
bills_nginx.conf
# the upstream component nginx needs to connect to
upstream django {
server unix:/var/www/html/bills/bills/bills.sock; # for a file socket
#server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
# the port your site will be served on
listen 80;
# the domain name it will serve for
server_name 192.168.5.5; # substitute your machine's IP address or FQDN
charset utf-8;
# max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media {
alias /var/www/html/bills/bills/bills/media; # your Django project's media files - amend as required
}
location /static {
alias /var/www/html/bills/bills/static; # your Django project's static files - amend as required
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass django;
include /var/www/html/bills/bills/uwsgi_params; # the uwsgi_params file you installed
}
}
And this is my uWSGI ini file:
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /var/www/html/bills/bills
# Django's wsgi file
module = bills.wsgi
# the virtualenv (full path)
home = /home/seb/.virtualenvs/bills3
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /var/www/html/bills/bills/bills.sock
# ... with appropriate permissions - may be needed
uid = www-data
gid = www-data
chown-socket = www-data:www-data
chmod-socket = 666
# clear environment on exit
vacuum = true
daemonize = /var/log/uwsgi/bills3.log
error_log = /var/log/nginx/bills3_error.log
in error.log I get:
2017/03/08 10:27:43 [error] 654#0: *1 connect() to unix:/var/www/html/bills/bills/bills.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.5.2, server: 192.168.5.5, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:/var/www/html/bills/bills/bills.sock:", host: "192.168.5.5:8000"
please help me get it working :)
chmod-socket, chown-socket, gid, uid, socket: for uWSGI and nginx to communicate over a socket, you need to specify the permissions and the owner of the socket. 777 as chmod-socket is much too liberal for production. However, you may have to play around with this number to get it right, so that everything that needs to communicate can. If you don't take care of your socket configuration, you will get errors like the one in your log.
So make sure the permissions of the folder are right as well.
I think a better way is:
$ sudo mkdir /var/uwsgi
$ sudo chown www-data:www-data /var/uwsgi
And change the socket path:
upstream django {
server unix:/var/uwsgi/bills.sock; # for a file socket
#server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
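To match, the socket-related lines in the uWSGI ini from the question would need the same path (a sketch; 664 instead of 666 keeps the socket a bit tighter):
socket = /var/uwsgi/bills.sock
chown-socket = www-data:www-data
chmod-socket = 664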
For more reference, please check this great article:
http://monicalent.com/blog/2013/12/06/set-up-nginx-and-uwsgi/
Also, I had the same issue before; you can check my configuration there too:
nginx django uwsgi page not found error
I have a jQuery request for JSON data in my Django app (which is started in a virtualenv with uWSGI and nginx):
$.getJSON( url, function( data ) {
    var obl = "/raj/?raj=" + data.id;
    $.getJSON( obl, function( raj_data ) {
        ...
    } );
} );
and the corresponding view:
import json

from django.db.models import Q
from django.http import HttpResponse

from .models import SprRegion  # adjust to wherever SprRegion is actually defined

def rcoato(request):
    response_data = SprRegion.objects.values('id').get(Q(id=request.GET['region']))
    response_data = json.dumps(response_data)
    return HttpResponse(response_data, content_type='application/javascript')
It works fine and JSON data is returned, but only while I'm logged in via SSH.
I start my application with:
source virtualenv/bin/activate
uwsgi --xml /home/rino/sites/centre/uwsgi.xml &
When I log out (with setopt no_hup and setopt no_checkjobs), my app only works partially: HTML pages render and static files are served, but requests to /raj/?raj=... raise Internal Server Error 500.
My nginx.conf:
server {
listen 8081;
server_name localhost;
access_log /var/log/nginx/nginx_centre_access.log;
error_log /var/log/nginx/nginx_centre_error.log;
location /static {
autoindex on;
alias /home/rino/sites/centre/centre/static/;
}
location / {
uwsgi_pass 127.0.0.1:3031;
include /home/rino/sites/centre/uwsgi_params;
}
}
uwsgi config:
<uwsgi>
<socket>127.0.0.1:3031</socket>
<processes>5</processes>
<pythonpath>/home/rino/sites/centre</pythonpath>
<chdir>/home/rino/sites/centre/centre</chdir>
<wsgi-file>/home/rino/sites/centre/centre/wsgi.py</wsgi-file>
<pidfile>/tmp/centre-master.pid</pidfile>
<plugin>python3</plugin>
<max-requests>5000</max-requests>
<harakiri>40</harakiri>
<master>true</master>
<threads>2</threads>
</uwsgi>
Output of cat nginx_centre_error.log | tail after logging out and making the request described above:
2014/10/03 07:34:46 [error] 20657#0: *296 connect() failed (111: Connection refused)
while connecting to upstream, client: 176.100.173.177, server: localhost, request: "GET
/settler/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:3031", host: "myhost.com:8081",
referrer: "http://myhost.com/settlersmain/"
2014/10/03 07:56:55 [error] 20657#0: *335 connect() failed (111: Connection refused)
while connecting to upstream, client: 176.100.173.177, server: localhost, request:
"GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:3031", host: "myhost.com:8081"
2014/10/03 08:23:33 [error] 20657#0: *367 open()
Thanks for any help!
UPD: I replaced localhost in the server_name line of nginx.conf with the server's IP address, but the issue is still present.
According to your log you have a (111: Connection refused) error, which means the uwsgi process is killed after you close the SSH connection.
You can try this instruction to use nohup.
uWSGI also has an option to daemonize. This is a better way; uWSGI will handle detaching from the console itself.
But I suggest you use something like supervisord to run uWSGI. Or you can use your system's init.d scripts or Upstart.
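For example, with the paths from the question, daemonizing could be as simple as adding --daemonize (the log path here is illustrative):
uwsgi --xml /home/rino/sites/centre/uwsgi.xml --daemonize /var/log/uwsgi/centre.log
Or a minimal supervisord entry, assuming the virtualenv lives at /home/rino/sites/centre/virtualenv (program name and paths are illustrative):
[program:centre_uwsgi]
command=/home/rino/sites/centre/virtualenv/bin/uwsgi --xml /home/rino/sites/centre/uwsgi.xml
directory=/home/rino/sites/centre
autostart=true
autorestart=true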
I am trying to upload a file via nginx_upload_module 2.2.0. I have nginx 1.0.4 set up as a reverse proxy with a Tornado server at the backend.
Below is my nginx.conf:
#user nobody;
worker_processes 1;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid /var/log/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
index index.html;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] $request '
# '"$status" $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
gzip on;
upstream frontends {
server 127.0.0.1:8888;
}
server {
listen 80;
server_name localhost;
#charset koi8-r;
# Allow file uploads max 50M for example
client_max_body_size 50M;
#access_log logs/host.access.log main;
error_log /var/log/error.log info;
# POST URL
location /upload {
# Pass altered request body to this location
upload_pass @after_upload;
# Store files to this directory
upload_store /tmp;
# Allow uploaded files to be read only by user
upload_store_access user:rw;
# Set specified fields in request body
upload_set_form_field $upload_field_name.name "$upload_file_name";
upload_set_form_field $upload_field_name.content_type "$upload_content_type";
upload_set_form_field $upload_field_name.path "$upload_tmp_path";
# Inform backend about hash and size of a file
upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
#upload_pass_form_field "some_hidden_field_i_care_about";
upload_cleanup 400 404 499 500-505;
}
location / {
root /opt/local/html;
}
location @after_upload {
proxy_pass http://127.0.0.1:8888;
}
}
}
I have already tested the setup, and nginx does forward the request to Tornado. But when I try to upload a file, it gives me a 400: Bad Request HTTP status code, with the Tornado log stating that upfile.path is missing from the request. And when I look in the folder where nginx should supposedly have stored the uploaded file, it isn't there, hence the 400 error.
Can anyone point out why nginx is not storing the file in the specified directory /tmp?
Tornado Log :
WARNING:root:400 POST /upload (127.0.0.1): Missing argument upfile_path
WARNING:root:400 POST /upload (127.0.0.1) 2.31ms
Nginx Error Log :
127.0.0.1 - - [14/Jul/2011:13:14:31 +0530] "POST /upload HTTP/1.1" 400 73 "http://127.0.0.1/" "Mozilla/5.0 (X11; Linux i686; rv:6.0) Gecko/20100101 Firefox/6.0"
More verbose Error Log with Info option:
2011/07/14 16:17:00 [info] 7369#0: *1 started uploading file "statoverride" to "/tmp/0000000001" (field "upfile", content type "application/octet-stream"), client: 127.0.0.1, server: localhost, request: "POST /upload HTTP/1.1", host: "127.0.0.1", referrer: "http://127.0.0.1/"
2011/07/14 16:17:00 [info] 7369#0: *1 finished uploading file "statoverride" to "/tmp/0000000001", client: 127.0.0.1, server: localhost, request: "POST /upload HTTP/1.1", host: "127.0.0.1", referrer: "http://127.0.0.1/"
2011/07/14 16:17:00 [info] 7369#0: *1 finished cleanup of file "/tmp/0000000001" after http status 400 while closing request, client: 127.0.0.1, server: 0.0.0.0:80
More verbose Error Log with Debug option:
http://pastebin.com/4NVCdmrj
Edit 1:
So, one thing we can infer from the above error log is that the file is being uploaded by nginx to /tmp but is getting cleaned up afterwards. I don't know why; I need help here.
I have just written a web application with Tornado and nginx-upload-module, and it works.
According to the Tornado log you've provided, I guess you can try changing your code from
self.get_argument('upfile_path')
to
self.get_argument('upload_tmp_path')
nginx did store the file, but the line upload_cleanup 400 404 499 500-505; tells it to clean up the file when your application responds with one of the specified HTTP status codes.