I am running a Django, uWSGI, and nginx server stack.
My server works fine for GET requests and for smaller POST requests. But when POSTing large requests, nginx returns a 502:
The nginx error.log shows:
2016/03/01 13:52:19 [error] 29837#0: *1482 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 175.110.112.36, server: server-ip, request: "POST /me/amptemp/ HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "servername"
To find where the real problem is, I ran uWSGI on a different port and checked whether the same request produced any error. The request was successful, so the problem is with the nginx or Unix socket configuration.
Nginx configuration:
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/uwsgi.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;

    # the domain name it will serve for
    server_name 52.25.29.179; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/usman/Projects/trequant/trequant-python/trequant/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/usman/Projects/trequant/trequant-python/trequant/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        ######## proxy_pass http://127.0.0.1:8000;
        ######## proxy_set_header Host $host;
        ######## proxy_set_header X-Real-IP $remote_addr;
        ######## proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_pass django;
        uwsgi_read_timeout 600s;
        uwsgi_send_timeout 600s;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
So, any idea what I am doing wrong? Thank you in advance.
Supposedly, setting post-buffering = 8192 in your uwsgi.ini file will fix this. I got this from a 2.5-year-old answer here, which also implies that this fix does not address the root cause. Hope it helps!
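For reference, a minimal uwsgi.ini fragment (the [uwsgi] section header is the standard layout; as I understand it, this makes uWSGI buffer the request body up front instead of streaming it from nginx, which avoids the broken-pipe race):

[uwsgi]
; bodies larger than this many bytes are buffered to disk;
; smaller ones are held in memory before the app sees the request
post-buffering = 8192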
Another fix is to use a TCP socket instead of a unix socket in your conf files:
In uwsgi.ini, use something like socket = 127.0.0.1:8000 in the [uwsgi] section instead of:
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
In your nginx.conf file (by the way, on Ubuntu I'm referring to /etc/nginx/conf.d/nginx.conf, NOT the one simply in /etc/nginx/), use uwsgi_pass 127.0.0.1:8000; instead of include uwsgi_params;
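Putting both sides together, the TCP variant is roughly this (a sketch assembled from the directives above; pick any free port):

# uwsgi.ini
[uwsgi]
socket = 127.0.0.1:8000

# nginx.conf, inside the server block
location / {
    uwsgi_pass 127.0.0.1:8000;
}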
I've posted this as a separate answer because either answer may work, and I'm interested to see which answer helps others the most.
In my case this seemed to happen for requests that would have returned a 308 redirect. I think my Node backend was sending the response before the POST data was fully received. Updating the client to hit the new endpoint directly (no redirect) may permanently fix my case. Seems promising.
Set a higher body buffer size, e.g. client_body_buffer_size 1M;. This will fix it; see the sketch after the references below.
References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
https://www.nginx.com/resources/wiki/start/topics/examples/full/
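For context, a minimal sketch of where that directive sits (the surrounding server block is illustrative, based on the configuration in the question):

server {
    listen 80;

    # buffer up to 1 MB of a request body in memory before nginx
    # spills it to a temporary file (the default is 8k or 16k)
    client_body_buffer_size 1M;

    # ... rest of the configuration unchanged ...
}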
I'm new to Python and have been put on a task of building a spreadsheet parser. I've created a Python script that reads an xlsx file and parses the data. I have an Nginx server set up that this will be hosted on. I need this script to be an API endpoint so I can pass the parsed data back as JSON. I have been reading about WSGI for a production server and have tried to follow the route of building that out. I am able to serve a path on the server and have it run the WSGI Python script. The script contains the following:
def application(environ, start_response):
    status = '200 OK'
    html = '<html>\n' \
           '<body>\n' \
           ' Hooray, mod_wsgi is working\n' \
           '</body>\n' \
           '</html>\n'
    response_header = [('Content-type', 'text/html')]
    start_response(status, response_header)
    return [html]
I'm a little confused as to how to receive a request and send back JSON from my Excel parser class. Thanks, and I hope I'm being clear. I do have a Flask server that works, but I do not know how to keep it constantly running to serve my endpoint:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/parser/direct_energy', methods=['GET'])
def get_data():
    # commissions_data is produced elsewhere by the spreadsheet parser
    return jsonify(commissions_data)

if __name__ == '__main__':
    app.run(host='0.0.0.0')
You don't want to use raw WSGI for this.
Use a package such as FastAPI (or Flask) to make everything easier for you.
For instance, using FastAPI, an app with an endpoint that receives a binary (Excel) file and returns a JSON response looks approximately like this:
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/process")
def process_file(file: UploadFile = File(...)):
    data = file.file.read()  # raw bytes of the uploaded spreadsheet
    response = my_data_processing_function(data)  # your parsing logic
    return {"response": response}
See:
To get going: https://fastapi.tiangolo.com/tutorial/
To process files: https://fastapi.tiangolo.com/tutorial/request-files/
To deploy your service (behind Nginx): https://fastapi.tiangolo.com/deployment/
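Once it is running (for example with uvicorn on port 8000; the file name and port here are assumptions for illustration), you can exercise the endpoint like this:

import requests

# Upload a spreadsheet to /process and print the JSON reply.
# The multipart field name must match the handler's parameter name, "file".
with open("commissions.xlsx", "rb") as f:
    r = requests.post("http://127.0.0.1:8000/process", files={"file": f})
print(r.json())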
I use python/flask for development & gunicorn for production.
To get it to accept HTTP requests, I use function decorators. It's the most common way.
@application.route('/epp/api/v1.0/request', methods=['POST'])
def eppJSON():
    if flask.request.json is None:
        return abort(400, "No JSON data was POSTed")
    return jsonRequest(flask.request.json, flask.request.remote_addr)
So here, the URL /epp/api/v1.0/request accepts POSTed JSON and returns JSON.
When you run Flask in dev mode, it listens on http://127.0.0.1:5000
https://github.com/james-stevens/epp-restapi/blob/master/epprest.py
https://github.com/james-stevens/dnsflsk/blob/master/dnsflsk.py
These are both python/flask projects of mine. Feel free to copy. They each run multiple instances of the Python code in a single container, load-balanced by nginx - a pretty neat combination.
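For the "constantly running" part: in production you launch the app under gunicorn rather than the Flask dev server. A typical invocation (myapp:app is a hypothetical module:variable pair; substitute your own) is:

# three worker processes, bound where nginx's proxy_pass points
gunicorn --bind 127.0.0.1:5000 --workers 3 myapp:app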
UPDATE
I got things working through Nginx, Flask, and Gunicorn. However, my Flask app only works when I go to '/'. If I go to a route such as /parser/de/v1, I get a 404 Not Found.
Here is my setup for Nginx:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html/excel_parser;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name 208.97.141.147;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://127.0.0.1:5000;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
My nginx.conf looks slightly different, partly because I am running multiple WSGI instances and getting nginx to load-balance across them:
worker_processes 3;

events {
    worker_connections 1024;
}

user daemon;

http {
    access_log off;
    error_log stderr error;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream dns_servers {
        server unix:/ram/dnsflsk_1.sock;
        server unix:/ram/dnsflsk_2.sock;
        server unix:/ram/dnsflsk_3.sock;
    }

    server {
        listen 800 ssl;
        server_name localhost;

        ssl_certificate certkey.pem;
        ssl_certificate_key certkey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://dns_servers;
        }
    }
}
But with this, all URLs are passed through to the Python/WSGI backend; note there is no try_files directive in the location block to intercept them before the proxy_pass.
I have registered a domain (for example, example.com) from godaddy.com, and I ran the commands to create a Let's Encrypt certificate, but there is an error.
(venv) ubuntu2@212.../microblog$ wget https://dl.eff.org/certbot-auto
(venv) ubuntu2@212.../microblog$ chmod a+x ./certbot-auto
(venv) ubuntu2@212...~/microblog$ ../certbot-auto certonly --webroot -w /home/ubuntu2/microblog -d example.com --email example@aa.com
But there is an error, as follows:
Requesting to rerun ./certbot-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for example.com
Using the webroot path /home/ubuntu2/microblog for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. example.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY: "<!DOCTYPE html><body style="padding:0; margin:0;"><html><body><iframe src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M"
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: example.com
Type: unauthorized
Detail: Invalid response from
http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY:
"<!DOCTYPE html><body style="padding:0;
margin:0;"><html><body><iframe
src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
And /etc/nginx/sites-enabled/microblog is as follows:
server {
    # listen on port 80 (http)
    listen 80;
    server_name example.com;

    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}

server {
    # listen on port 443 (https)
    listen 443 ssl;
    server_name example.com;

    # location of the self-signed SSL certificate
    #ssl_certificate /home/ubuntu/microblog2/certs/cert.pem;
    #ssl_certificate_key /home/ubuntu/microblog2/certs/key.pem;

    # write access and error logs to /var/log
    access_log /var/log/microblog_access.log;
    error_log /var/log/microblog_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://127.0.0.1:8000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        # handle static files directly, without forwarding to the application
        alias /home/ubuntu2/microblog/static;
        expires 30d;
    }

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /home/ubuntu2/microblog/;
    }

    location = /.well-known/acme-challenge/ {
        return 404;
    }
}
I don't know what is wrong here; could you help me solve this issue? Thanks!
It appears from the error message that example.com is not pointing to your Nginx server address yet. Notice this in the error message:
Detail: Invalid response from
http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY:
"<!DOCTYPE html><body style="padding:0;
margin:0;"><html><body><iframe
src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M" <<<<<<<<<<<<<<<
The godaddy.com/park part likely means that the domain is parked at GoDaddy, and is not reaching your server yet.
The error message provided is also useful for understanding the problem:
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Check your DNS settings and try again. Also note that it could be related to DNS propagation time, if the DNS change was recent. In that case you need to wait for the change to propagate, which can take up to 24 hours.
P.S. You can confirm this by running curl example.com on the command line. Make sure it returns the home page of your server. When it does, try again.
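A quick way to check from a shell, for instance:

# should print your server's IP address, not a GoDaddy parking address
dig +short example.com

# should return your site's home page, not a parking page with an iframe
curl -L http://example.com/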
I'm having trouble setting up my site with https. At the moment, I have my nginx server set to listen for both http and https requests.
However, now I only want to allow https, and to redirect any http requests to https.
I tried this post without any luck: How to deploy an HTTPS-only site, with Django/nginx?
What is the recommended way of doing this in Django 1.7+?
Below is my nginx.conf file:
# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    server unix:///uwsgi-tutorial/mysite/mysite.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    listen 443 default_server ssl;
    #ssl on;
    ssl_certificate /uwsgi-tutorial/conf/www.example.com.chained.crt;
    ssl_certificate_key /uwsgi-tutorial/conf/www.example.com.key;

    # the domain name it will serve for
    # server_name localhost; # substitute your machine's IP address or FQDN
    server_name example.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /uwsgi-tutorial/mysite/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /uwsgi-tutorial/mysite/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /uwsgi-tutorial/mysite/uwsgi_params; # the uwsgi_params file you installed
    }
}
There is already a library that does this job just fine - sslify:
https://github.com/rdegges/django-sslify
Just proceed with the instructions on the GitHub page.
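In short, per the project's README (a sketch; your settings module layout may differ), you add its middleware at the top of settings.py:

# settings.py (Django 1.7-era MIDDLEWARE_CLASSES)
# SSLifyMiddleware issues the HTTP-to-HTTPS redirect, so list it first
# so it runs before any other middleware.
MIDDLEWARE_CLASSES = (
    'sslify.middleware.SSLifyMiddleware',
    # ... your other middleware ...
)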
I am using nginx as a proxy server for a Django app running under gunicorn. The Django app is bound to http://127.0.0.1:8000, and here's my nginx setup from /etc/nginx/sites-enabled/parkitbackend:
server {
    server_name AA.BB.CC.DD;
    access_log off;

    location /static/ {
        autoindex on;
        alias /home/zihe/parkitbackend/parkitbackend/common-static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
I am using the Python requests module:
requests.post("http://AA.BB.CC.DD/dashboard/checkin/", data=unicode(json.dumps(payload), "utf8"))
to post JSON objects to my Django app called dashboard, where I have a function in dashboard/views.py called checkin to process the JSON object.
I did not receive any errors from running the JSON-posting script. However, nginx does not seem to be able to pass the request to gunicorn, bound at 127.0.0.1:8000. What should I do so that nginx can pass the JSON to my Django app? Thank you!
Additional notes:
I am very sure the JSON-posting code and my Django app work properly, since I tested them by binding the Django app to http://AA.BB.CC.DD:8000 and running this code in Python:
requests.post("http://AA.BB.CC.DD:8000/dashboard/checkin/", data=unicode(json.dumps(payload), "utf8"))
and my Django app received the JSON as expected.
I checked the error.log located at /var/log/nginx/. It turns out that the JSON I was sending was too large and was giving this error:
[error] 3450#0: *9 client intended to send too large body: 1243811 bytes, client: 127.0.0.1, server: _, request: "POST /dashboard/checkin/ HTTP/1.1", host: "127.0.0.1"
After reading up on this link: http://gunicorn-docs.readthedocs.org/en/19.3/deploy.html#nginx-configuration
I reduced the size of the JSON and modified /etc/nginx/sites-enabled/parkitbackend to be like this:
upstream app_server {
    server 127.0.0.1:8000;
}

server {
    listen AA.BB.CC.DD:80;
    server_name = _;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }

    location /static/ {
        autoindex on;
        alias /home/username/parkitbackend/parkitbackend/common-static/;
    }
}
and replaced this line in /etc/nginx/nginx.conf:
include /etc/nginx/sites-enabled/*;
with this:
include /etc/nginx/sites-enabled/parkitbackend;
And the problem was resolved.
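For what it's worth, the log line also points at an alternative that avoids shrinking the payload (my suggestion, not part of the resolution above): raise nginx's body-size limit, which defaults to 1m and therefore rejected the roughly 1.2 MB POST. A sketch:

server {
    # accept request bodies up to 2 MB instead of the 1 MB default
    client_max_body_size 2M;

    # ... rest of the server block unchanged ...
}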
I am running Ubuntu 10.04, Django 1.3, Nginx 0.8.54 and uWSGI 0.9.7.
Both Nginx and uWSGI load without error. However, when you access my site, it sits for a LONG time and then eventually loads a "504 Gateway Time-out" error.
Here is my Nginx Virtual Host conf file:
server {
    listen 80;
    server_name www.mysite.com mysite.com;

    error_log /home/mysite/log/error.log;
    access_log /home/mysite/log/access.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /home/mysite/public/passwd;
        include uwsgi_params;
        uwsgi_pass unix:///home/mysite/public/myapp.sock;
    }

    location /media {
        alias /home/mysite/public/myapp/media;
    }

    error_page 401 /coming_soon.html;

    location /coming_soon.html {
        root /home/mysite/public/error_pages/401;
    }

    location /401/images {
        alias /home/mysite/public/error_pages/401/images;
    }

    location /401/style {
        alias /home/mysite/public/error_pages/401/style;
    }
}
My site log shows this:
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request / !!!
My error log shows this:
upstream timed out (110: Connection timed out) while reading response header from upstream
I have two other sites on this server with the same configuration and they load PERFECTLY.
Has anyone else encountered this problem? There are several threads on here that are similar to my issue and I've tried several of those solutions but nothing seems to work.
Thank you in advance for your help!
That error is produced when a request exceeds NGINX's uwsgi_read_timeout setting. Once NGINX hits this limit it closes the socket, and uWSGI then tries to write to the closed socket, producing the error that you see from uWSGI.
Make sure your NGINX timeouts are at least as high as your uWSGI timeouts (HARAKIRI_TIMEOUT).
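For example (a sketch; the 300-second value is an assumption, match it to your app's slowest request):

# nginx virtual host: give the upstream at least as long as harakiri
location / {
    include uwsgi_params;
    uwsgi_pass unix:/home/mysite/public/myapp.sock;
    uwsgi_read_timeout 300s;
}

# uwsgi.ini: kill and recycle any worker stuck for more than 300 seconds
[uwsgi]
harakiri = 300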
The syntax
unix:///home/mysite/public/myapp.sock;
is not correct; use it like this:
unix:/home/mysite/public/myapp.sock;
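Applied to the location block from the question above, that gives:

location / {
    auth_basic "Restricted";
    auth_basic_user_file /home/mysite/public/passwd;
    include uwsgi_params;
    uwsgi_pass unix:/home/mysite/public/myapp.sock;
}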