Nginx + uWSGI error: request canceled (Client.Timeout exceeded while awaiting headers) - python

I am using an nginx + uWSGI combination for a POST API.
The API endpoint is expected to receive 3,000 requests per 10 seconds.
The source machines are sometimes unable to POST requests and fail with the error
net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I have the following settings in nginx:
location / {
    proxy_pass http://127.0.0.1:5000;
    uwsgi_read_timeout 300s;
    include uwsgi_params;
}
and
http {
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    keepalive_requests 100000;
    reset_timedout_connection on;
    client_body_timeout 10;
    send_timeout 2;
    types_hash_max_size 2048;
    client_max_body_size 20M;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
}
events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}
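Note that the location block above mixes HTTP proxying (proxy_pass) with uwsgi-module directives (uwsgi_read_timeout, uwsgi_params), and the latter only take effect when uwsgi_pass is used. As a minimal sketch (assuming the uWSGI app listens on 127.0.0.1:5000 in the matching protocol), a consistent pairing would be one of:

# if uWSGI speaks plain HTTP (e.g. started with --http-socket)
location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_read_timeout 300s;
}

# if uWSGI speaks the uwsgi protocol (e.g. started with --socket)
location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:5000;
    uwsgi_read_timeout 300s;
}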

Related

How to fix 502 Bad Gateway Error in production (Nginx)?

When I try to upload a big CSV file of about 600 MB in my project, which is hosted on DigitalOcean, the upload starts but then shows a 502 Bad Gateway error (nginx).
The application is a data conversion application.
This works fine when running locally.
sudo tail -30 /var/log/nginx/error.log
shows
[error] 132235#132235: *239 upstream prematurely closed connection while reading response header from upstream, client: client's ip , server: ip, request: "POST /submit/ HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/submit/", host: "ip", referrer: "http://ip/"
sudo nano /etc/nginx/sites-available/myproject
shows
server {
    listen 80;
    server_name ip;
    client_max_body_size 999M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias /root/static/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
nginx.conf
user root;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
I also have a JavaScript loader running while the conversion process takes place.
How can I fix this?
If you are using Django 3.1 or higher, you can make your file processing asynchronous this way and return a response to the user while the file conversion takes place.
Your view should look something like this:
import asyncio
from django.http import JsonResponse
from asgiref.sync import sync_to_async

@sync_to_async
def crunching_stuff(my_file):
    # do your conversion here
    pass

async def upload(request):
    json_payload = {
        "message": "Your file is being converted"
    }
    # note: actual file uploads normally arrive in request.FILES
    my_file = request.POST.get('file')
    asyncio.create_task(crunching_stuff(my_file))
    return JsonResponse(json_payload)
On the front end, if you use Dropzone.js, your users can see the file upload progress and will get a response more quickly, which is a better user experience.
https://www.dropzonejs.com/
This error can indicate multiple problems. The fact that it works for you locally makes it more likely that the issue lies on the nginx side.
You can try to solve it by increasing the timeout thresholds (as suggested here) and the buffer sizes. Add this to your server's nginx.conf:
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
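For context, a sketch of where these directives would typically sit for the gunicorn setup shown in the question (the socket path is taken from the question; the values are examples, not tuned figures):

server {
    listen 80;
    server_name ip;
    client_max_body_size 999M;

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;

        # give the long-running conversion time to respond and
        # buffer larger responses from the upstream
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}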
A 502 error can be caused by almost anything.
Check your nginx error log as follows:
tail /var/log/nginx/error.log -f
In my case it was because the header was too big, so I had to increase the buffer size in /etc/nginx/sites-enabled/default, as Chen.A described.

Nginx: why does a 400 Bad Request happen?

I'm building my own Django server with nginx and uWSGI.
I'm almost done, but a 400 Bad Request always appears...
What am I doing wrong?
[info] 61794#0: *35 client sent invalid method while reading client request line, client: 127.0.0.1, server: localhost, request: "
QUERY_STRINGREQUEST_METHODGET
CONTENT_TYPECONTENT_LENGTH
REQUEST_URI/ PATH_INFO/"
This is the error log (info level), and this is the .conf file:
upstream django {
    #server localhost:9001;
    server unix://~/Desktop/fido_virtual/fido.sock;
}

server {
    listen 8999;
    server_name localhost;
    charset utf-8;
    client_max_body_size 75M;

    location /media {
        alias ~/Desktop/fido_virtual/media/;
    }

    location /static {
        alias ~/Desktop/fido_virtual/fidoproject/staticfiles/;
    }

    location / {
        uwsgi_pass django;
        include /usr/local/etc/nginx/uwsgi_params;
    }
}
This .conf is in the project folder.
upstream django {
    #server localhost:9001;
    server unix:///Users/junbeomkwak/Desktop/fido_virtual/fido.sock;
}

server {
    listen 8999;
    server_name localhost;
    charset utf-8;
    client_max_body_size 75M;

    location /media {
        alias /Users/junbeomkwak/Desktop/fido_virtual/media/;
    }

    location /static {
        alias /Users/junbeomkwak/Desktop/fido_virtual/fidoproject/staticfiles;
    }

    location / {
        uwsgi_pass django;
        #include /Users/junbeomkwak/Desktop/fido_virtual/fidoproject/uwsgi_params;
        include /usr/local/etc/nginx/uwsgi_params;
    }
}
This file is in /usr/local/etc/nginx/site-enabled/.conf.
#user nobody;
worker_processes 1;

error_log /var/log/error.log;
#error_log logs/error.log notice;
error_log /var/log/errorngnix.log info;

events {
    worker_connections 1024;
}

http {
    large_client_header_buffers 4 16k;
    include mime.types;
    include /usr/local/etc/nginx/sites-enabled/*;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 8080;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;
And this is /usr/local/etc/nginx/nginx.conf.
What is my mistake? Please help me.

Nginx: Sub-domain config file

I have a subdomain. Currently it shows an index.html page containing just the name of the domain, etc.
That index.html page is loaded from /home/admin/web/****.******.com/public_html.
What I cannot find is the config file that points to that directory.
I checked /etc/nginx/conf.d twice; it has nothing that seems to point to that page.
I am using CentOS with nginx.
/etc/nginx/nginx.conf:

# Server globals
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log crit;
pid /var/run/nginx.pid;

# Worker config
events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    # Main settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_header_timeout 1m;
    client_body_timeout 1m;
    client_header_buffer_size 2k;
    client_body_buffer_size 256k;
    client_max_body_size 256m;
    large_client_header_buffers 4 8k;
    send_timeout 30;
    keepalive_timeout 60 60;
    reset_timedout_connection on;
    server_tokens off;
    server_name_in_redirect off;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 512;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] $request '
                    '"$status" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format bytes '$body_bytes_sent';
    #access_log /var/log/nginx/access.log main;
    access_log off;

    # Mime settings
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Compression
    gzip on;
    gzip_comp_level 9;
    gzip_min_length 512;
    gzip_buffers 8 64k;
    gzip_types text/plain text/css text/javascript text/js text/xml application/json application/javascript application/x-javascript application/xml application/xml+r$
    gzip_proxied any;
    gzip_disable "MSIE [1-6]\.";

    # Proxy settings
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header Set-Cookie;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;

    # Cloudflare https://www.cloudflare.com/ips
    set_real_ip_from 199.27.128.0/21;
    set_real_ip_from 173.245.48.0/20;
    set_real_ip_from 103.21.244.0/22;
    set_real_ip_from 103.22.200.0/22;
    set_real_ip_from 103.31.4.0/22;
    set_real_ip_from 141.101.64.0/18;
    set_real_ip_from 108.162.192.0/18;
    set_real_ip_from 190.93.240.0/20;
    set_real_ip_from 188.114.96.0/20;
    set_real_ip_from 197.234.240.0/22;
    set_real_ip_from 198.41.128.0/17;
    set_real_ip_from 162.158.0.0/15;
    set_real_ip_from 104.16.0.0/12;
    set_real_ip_from 172.64.0.0/13;
    #set_real_ip_from 2400:cb00::/32;
    #set_real_ip_from 2606:4700::/32;
    #set_real_ip_from 2803:f800::/32;
    #set_real_ip_from 2405:b500::/32;
    #set_real_ip_from 2405:8100::/32;
    real_ip_header CF-Connecting-IP;

    # SSL PCI Compliance
    ssl_session_cache shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SH$

    # Error pages
    error_page 403 /error/403.html;
    error_page 404 /error/404.html;
    error_page 502 503 504 /error/50x.html;

    # Cache settings
    proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=1024m;
    proxy_cache_key "$host$request_uri $cookie_user";
    proxy_temp_path /var/cache/nginx/temp;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_use_stale error timeout invalid_header http_502;
    proxy_cache_valid any 1d;

    # Cache bypass
    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    # File cache settings
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;
    # Wildcard include
    include /etc/nginx/conf.d/*.conf;
}
If you check the configuration file you'll see the following:
include /etc/nginx/sites-enabled/*;
This means that additional vhost config files are being loaded from "/etc/nginx/sites-enabled/".
The correct way to use this is to have the config files in /etc/nginx/sites-available/ and create symlinks to them in /etc/nginx/sites-enabled/.
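For reference, a vhost file for such a subdomain usually looks something like the sketch below; the server_name and paths are placeholders (the real domain is masked in the question), and the file would live in whichever directory the wildcard include at the end of nginx.conf pulls in:

server {
    listen 80;
    # placeholder; substitute the masked subdomain from the question
    server_name sub.example.com;

    # placeholder path following the layout described in the question
    root /home/admin/web/sub.example.com/public_html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}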

Install NGINX with SSL on port 443 over BottlePy on port 80

I have a web service created in BottlePy. It is now in production, but it only serves on port 80.
I need an HTTPS solution, and I read that NGINX is a good option.
But is it possible to install NGINX without changing the BottlePy code?
Is it possible to serve on port 80 in BottlePy and on port 443 in NGINX?
Can someone help me, please?
Thanks
EDIT:
I solved the problem.
/etc/nginx/nginx.conf:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 443 default_server;
        listen [::]:443 default_server;
        server_name _;
        root /usr/share/nginx/html;

        ssl on;
        ssl_certificate /xxx.crt;
        ssl_certificate_key /xxx.key;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass http://localhost:80;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
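One optional addition, not part of the configuration above but a common pattern when terminating TLS in front of a plain-HTTP backend, is to forward the original host and scheme so the Bottle app can generate correct HTTPS URLs and redirects:

location / {
    proxy_pass http://localhost:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}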

Unable to deploy Tornado with nginx reverse proxy, no errors are displayed

I am trying to deploy my Tornado app with Nginx as a proxy. I am running this on a RamNode VPS (512 CVZ) whose configuration is:
512MB RAM
512MB VSwap
2 CPU Core Access
120GB SSD-Cached HDD Space
1Gbps Port
I am not using supervisor for now, and I manually started four instances of my Tornado process:
sudo python /home/magneto/pricechase/main.py --port=8000 &
sudo python /home/magneto/pricechase/main.py --port=8001 &
sudo python /home/magneto/pricechase/main.py --port=8002 &
sudo python /home/magneto/pricechase/main.py --port=8003 &
and I can access the site now at pricechase.in:8000 to pricechase.in:8003
I created a new user nginx and gave permissions to my project directory:
sudo adduser --system --no-create-home --disabled-login --disabled-password --group nginx
sudo chown -R nginx:nginx /home/magneto/pricechase/
Following is the conf file for my project, located at /etc/nginx/sites-enabled/pricechase.in
user nginx;
worker_processes 5;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
    proxy_next_upstream error;

    upstream tornadoes {
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    keepalive_timeout 65;
    proxy_read_timeout 200;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/html text/css text/xml
               application/x-javascript application/xml
               application/atom+xml text/javascript;

    server {
        listen 80;
        server_name pricechase.in www.pricechase.in;

        location /static/ {
            root /home/magneto/pricechase/static;
            if ($query_string) {
                expires max;
            }
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://tornadoes;
        }
    }
}
When I tried to restart the nginx service I got the following error:
Restarting nginx: nginx: [emerg] unknown directive "user" in /etc/nginx/sites-enabled/pricechase.in:1
nginx: configuration file /etc/nginx/nginx.conf test failed
As the answers indicated here, I commented out the following line in /etc/nginx/nginx.conf:
# include /etc/nginx/sites-enabled/*;
Now I can start/restart it; however, when I type in pricechase.in, the site does not open. It also cannot serve static files, e.g. http://pricechase.in/static/css/tooltipster.css, which is located at /home/magneto/pricechase/static/css/tooltipster.css.
The following are the contents of /etc/nginx/nginx.conf:
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/sites-enabled/*;
    # include /etc/nginx/sites-enabled/pricechase.in;
}
How do I debug this?
Why is it not logging any errors? I checked the access and error logs (located in /var/log/nginx/).
Any tips/improvements for the nginx config files?
If I understand correctly, /etc/nginx/nginx.conf is like a base template, and all the common config I need for any other hosts should be included in it, right?
If I want to add another domain and serve different Tornado instances, I guess I have to add a new conf in sites-enabled; however, will there be any conflict? For example, I want http://abc.xyz.com to serve static content located in /home/magneto/blog and http://pqrs.com to serve the Tornado processes on ports 8005 to 8008.
A config fragment in sites-enabled should only include the part inside the "http" block; the other parts can only appear at the top-level nginx.conf.
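A sketch of what a trimmed /etc/nginx/sites-enabled/pricechase.in could look like under that rule, keeping only the upstream and server blocks from the original file (and with the include /etc/nginx/sites-enabled/*; line in nginx.conf uncommented again):

upstream tornadoes {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    server_name pricechase.in www.pricechase.in;

    location /static/ {
        # note: with "root", the /static/ URI prefix is appended to this path,
        # so /static/css/x.css is looked up under .../static/static/css/x.css;
        # if files live directly under /home/magneto/pricechase/static,
        # either use "root /home/magneto/pricechase;" or switch to "alias"
        root /home/magneto/pricechase/static;
        if ($query_string) {
            expires max;
        }
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://tornadoes;
    }
}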
