Django + Nginx + uWSGI = 504 Gateway Time-out

I am running Ubuntu 10.04, Django 1.3, Nginx 0.8.54 and uWSGI 0.9.7.
Both Nginx and uWSGI load without error. However, when I access my site, it hangs for a long time and eventually returns a "504 Gateway Time-out" error.
Here is my Nginx Virtual Host conf file:
server {
    listen 80;
    server_name www.mysite.com mysite.com;
    error_log /home/mysite/log/error.log;
    access_log /home/mysite/log/access.log;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /home/mysite/public/passwd;
        include uwsgi_params;
        uwsgi_pass unix:///home/mysite/public/myapp.sock;
    }
    location /media {
        alias /home/mysite/public/myapp/media;
    }
    error_page 401 /coming_soon.html;
    location /coming_soon.html {
        root /home/mysite/public/error_pages/401;
    }
    location /401/images {
        alias /home/mysite/public/error_pages/401/images;
    }
    location /401/style {
        alias /home/mysite/public/error_pages/401/style;
    }
}
My site log shows this:
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request / !!!
My error log shows this:
upstream timed out (110: Connection timed out) while reading response header from upstream
I have two other sites on this server with the same configuration and they load PERFECTLY.
Has anyone else encountered this problem? There are several threads on here that are similar to my issue, and I've tried several of those solutions, but nothing seems to work.
Thank you in advance for your help!

That error is produced when a request exceeds the NGINX uwsgi_read_timeout setting. Once NGINX exceeds this limit it closes the socket, and uWSGI then tries to write to the closed socket, producing the error that you see from uWSGI.
Make sure your NGINX timeouts are at least as high as your uWSGI timeout (harakiri).
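For illustration, here is a minimal sketch of keeping the two timeouts aligned; the 300-second value is an assumption chosen for the example, and the socket path simply mirrors the one from the question:
# nginx location block (illustrative values)
location / {
    include uwsgi_params;
    uwsgi_pass unix:/home/mysite/public/myapp.sock;
    uwsgi_read_timeout 300;    # how long nginx waits for a response from uWSGI
    uwsgi_send_timeout 300;    # how long nginx waits while sending the request to uWSGI
}

# uwsgi.ini (illustrative values)
[uwsgi]
harakiri = 300                 # uWSGI kills any worker request that runs longer than this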

unix:///home/mysite/public/myapp.sock;
That syntax is not correct; use this instead:
unix:/home/mysite/public/myapp.sock;

Related

Why can't my Nginx transmit requests to my web server?

I'm not a native English speaker, and parts of the paragraphs below are translated directly from Chinese, so please excuse any typing or grammatical errors. I am familiar with the technical terms, but some slang expressions and idioms are difficult for me. I've posted my question in English and I'll be glad to translate responses. Any help is appreciated.
I'm trying to run a Flask app on my server, but I ran into some errors while setting it up. Because another service I depend on is only available on Windows, the typical Nginx + uWSGI solution isn't an option, so I'm using Nginx + Tornado. When I ran my app, Nginx wasn't working correctly. With Tornado listening on port 9900, I can access my site at http://localhost:9900, but when I visit http://localhost I see nothing but a 504 page. I also tried visiting it remotely, with the same result. I suspect there may be an error in my Nginx configuration, but I'm not sure. Below are my configuration and the logs from Nginx and Tornado.
# nginx.conf
worker_processes 1;
pid ./logs/nginx.pid;
error_log ./log debug;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass https://127.0.0.1:9900;
        }
        error_page 500 502 503 504 /50x.html;
    }
}
# error.log
2022/10/04 13:07:49 [notice] 1632#4400: signal process started
# server.py (for tornado)
from tornado.httpserver import HTTPServer
from tornado.wsgi import WSGIContainer
from app import app # my flask app
from tornado.ioloop import IOLoop
s = HTTPServer(WSGIContainer(app))
s.listen(9900)
IOLoop.current().start()
I'm not sure where the error happens, since I can see the Nginx process running. Did I make a mistake when setting up Tornado, or in my Nginx configuration? I've tried every method I could find online, but none of them worked. Because of the firewall in China I can only reach a limited part of the internet, so help from elsewhere would be very welcome. Again, any help is appreciated.

Automate Let's Encrypt for Django with nginx and uwsgi

I'm worried that this question may be one that could be answered very simply if I just knew what to look for, so I apologise if this is something that's been addressed before.
I've set up a production web server for a Django app using nginx and uwsgi. It's got a let's encrypt SSL certificate installed, and now I'd like to automate the renewal.
I used the method referenced in this article to add the certificate: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04 by adding the .well-known directory to the server block.
location ~ /.well-known {
    allow all;
}
I've tried to keep this, but /.well-known now returns 403 Forbidden from nginx when the rest of the server config is added (provided below).
Can anyone tell me what I've done wrong or how to solve this?
Here's the server config file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.website.co.uk;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    include snippets/ssl-website.co.uk.conf;
    include snippets/ssl-params.conf;
    location /.well-known/ {
        root /home/user/website;
        allow all;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/user/website;
    }
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/website/website.sock;
    }
}
Thanks in advance. I'm still quite new to this and trying to learn.
I had a similar problem. This answer was my solution:
https://stackoverflow.com/a/38949101/4098053
I hope this will help you too!
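For reference, one common approach (not necessarily the one in the linked answer) is to give the ACME challenge path its own prefix location, placed before any rules that deny access to dot-files. A minimal sketch, assuming certbot's webroot is /home/user/website as in the question; everything else here is illustrative:
# inside the 443 server block, before any "deny all" rules for dot-files
location ^~ /.well-known/acme-challenge/ {
    root /home/user/website;
    allow all;
    default_type "text/plain";
}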

sendfile() failed (32: Broken pipe) while sending request to upstream nginx 502

I am running a Django, uWSGI, and nginx server.
My server works fine for GET and POST requests of smaller size, but when POSTing large requests, nginx returns a 502:
nginx error.log is:
2016/03/01 13:52:19 [error] 29837#0: *1482 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 175.110.112.36, server: server-ip, request: "POST /me/amptemp/ HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "servername"
So, in order to find where the real problem is, I ran uwsgi on a different port and checked if any error occurs with the same request. But the request was successful. So, the problem is with nginx or unix socket configuration.
Nginx configuration:
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/uwsgi.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name 52.25.29.179; # substitute your machine's IP address or FQDN
    charset utf-8;
    # max upload size
    client_max_body_size 75M; # adjust to taste
    # Django media
    location /media {
        alias /home/usman/Projects/trequant/trequant-python/trequant/media; # your Django project's media files - amend as required
    }
    location /static {
        alias /home/usman/Projects/trequant/trequant-python/trequant/static; # your Django project's static files - amend as required
    }
    # Finally, send all non-media requests to the Django server.
    location / {
        ######## proxy_pass http://127.0.0.1:8000;
        ######## proxy_set_header Host $host;
        ######## proxy_set_header X-Real-IP $remote_addr;
        ######## proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_pass django;
        uwsgi_read_timeout 600s;
        uwsgi_send_timeout 600s;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
So, any idea what I am doing wrong? Thank you in advance.
Supposedly, setting post-buffering = 8192 in your uwsgi.ini file will fix this. I got this from a 2.5-year-old answer here, which implies this fix does not address the root cause. Hope it helps!
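If you want to try that, a minimal uwsgi.ini sketch is below; the 8192 value comes from the answer above, and the socket path is only illustrative:
# uwsgi.ini (illustrative)
[uwsgi]
socket = /tmp/uwsgi.sock
post-buffering = 8192    # enable request-body buffering in uWSGI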
Another fix is to use a TCP socket instead of a unix socket in your conf files:
In uwsgi.ini, use something like socket = 127.0.0.1:8000 in the [uwsgi] section instead of:
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
In your nginx.conf file (by the way, in Ubuntu I'm referring to /etc/nginx/conf.d/nginx.conf, NOT the one simply in /etc/nginx/), use uwsgi_pass 127.0.0.1:8000; instead of passing to the unix socket.
I've posted this as a separate answer because either answer may work, and I'm interested to see which answer helps others the most.
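Put together, a minimal sketch of the TCP-socket variant might look like this (the port and paths are illustrative):
# uwsgi.ini
[uwsgi]
socket = 127.0.0.1:8000

# nginx server block
location / {
    include /etc/nginx/uwsgi_params;
    uwsgi_pass 127.0.0.1:8000;
}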
In my case this seemed to happen for requests that would have returned a 308 redirect. I think my Node backend was sending the response before the POST data was fully received. Updating the client to hit the new endpoint (no redirect) may permanently fix my case. Seems promising.
Setting a higher body buffer size, e.g. client_body_buffer_size 1M;, will fix this.
References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
https://www.nginx.com/resources/wiki/start/topics/examples/full/
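For context, a minimal sketch of where those directives sit in the nginx config; the 1M value is the one suggested above, and the 75M limit mirrors the question's configuration:
# nginx server (or http) context
client_max_body_size 75M;       # largest request body nginx will accept
client_body_buffer_size 1M;     # bodies larger than this are buffered to a temporary file on disk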

Nginx + Gunicorn - error pages for static resources

I am running a Python Flask application with Gunicorn and Nginx as a reverse proxy. Pages are served by Gunicorn and Nginx is serving files from my static folder directly.
It's working correctly except when I get a 404 on a static resource.
I have setup custom error handlers in Flask to show 'pretty' error pages on HTTP error codes. This is also working fine when I request a non-existent page.
However, when a static resource doesn't exist then nginx serves its own default 404 page instead of the Flask app's 404 page (which makes sense since it's bypassing Gunicorn). Is there a way to tell nginx to serve the Flask error handler page via Gunicorn if it encounters an error serving a static resource?
Here is my current nginx conf file for this server:
server {
    listen 80;
    server_name example.com;
    access_log /home/aaron/dev/apwd-flask/logs/access.log;
    error_log /home/aaron/dev/apwd-flask/logs/error.log;
    location / {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }
    location /static {
        alias /home/aaron/dev/apwd-flask/app/static/;
    }
}
I'm thinking (hoping) I can use an error_page directive to give control back to Gunicorn and tell it to serve the appropriate custom error handler, but haven't been able to figure out if that's possible or how to do it from the documentation.
Answering my own question, as I was able to locate an answer after a lot of searching; I'm posting it here for the benefit of anyone else who may have the same issue. I expect this would work for any backend, not just Gunicorn.
https://www.nginx.com/resources/admin-guide/serving-static-content/
In the section entitled 'Trying Several Options', the final example shows the solution to this problem: using the try_files directive in the static location block, I can tell nginx to pass the request to a named location if it fails to find the requested file.
Here is my new nginx conf file which is working as expected now for non-existent static file requests:
server {
    listen 80;
    server_name example.com;
    access_log /home/aaron/dev/apwd-flask/logs/access.log;
    error_log /home/aaron/dev/apwd-flask/logs/error.log;
    location @apwd_flask {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }
    location / {
        try_files $uri @apwd_flask;
    }
    location /static {
        alias /home/aaron/dev/apwd-flask/app/static/;
        try_files $uri @apwd_flask;
    }
}
Now my location @apwd_flask is the Gunicorn backend, and when a static file isn't found by nginx serving it directly, the request is sent to the backend, which serves its own 404 response.
You need to change the owner of the files in the directory below:
/home/aaron/dev/apwd-flask/app/static/
To give the nginx user read access to the files in the static directory, change the owner (or the owning group) to www-data and grant read access to all files in this directory.
You can do this by running the command below:
chown -R www-data:www-data /home/aaron/dev/apwd-flask/app/static/

Why is my nginx (engine-x) not listening to my request?

I have created a small app and run it as per the instructions given at the URL below:
http://agiliq.com/blog/2013/08/minimal-nginx-and-gunicorn-configuration-for-djang/
I have created an example file in the /etc/nginx/sites-enabled directory, and my example file contains the code below:
server {
    listen localhost:8000;
    location / {
        proxy_pass http://127.0.0.1:8001;
    }
    location /static/ {
        autoindex on;
        alias /home/mulagala/nginx_example/mysite/static/;
    }
}
But when I try to connect to localhost:8000 I get an "unable to connect" error. Why is my nginx server not listening to my request? What mistake am I making? Any help would be appreciated.
