I am load testing my application. I have an EC2 server running Flask + uWSGI + Nginx (configured as per https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-20-04).
I tested with 4K records in 4 seconds and can see a lot of errors like the one below:
2022/04/17 15:16:37 [error] 19929#19929: *7769 connect() to unix:/home/ubuntu/wip/iotlistener/iotlistener.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: XX.XX.XX.XX, server: , request: "POST / HTTP/1.1", upstream: "uwsgi://unix:/home/ubuntu/wip/iotlistener/iotlistener.sock:
I can see the EC2 server is quite stable, and the CPU load does not go beyond 50%. The network usage is high of course, but no red lines. The service itself is very light: it just dumps data into DynamoDB, and I can see the DB metrics are quite stable.
So I feel this is due to some default configuration that restricts the load. Can you please help me identify it?
iotlistener.ini
[uwsgi]
module = wsgi:app
master = true
processes = 25
socket = iotlistener.sock
chmod-socket = 660
vacuum = true
die-on-term = true
The process count was 5; I changed it to 25, with no change in behaviour.
And this is the nginx configuration:
server {
listen 80;
location / {
include uwsgi_params;
uwsgi_pass unix:/home/ubuntu/wip/iotlistener/iotlistener.sock;
}
}
I am expecting a production load well beyond 1K records per second, so please help me with this!
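For what it's worth, one default worth checking (this is an assumption, not a confirmed diagnosis) is the uWSGI listen backlog: error 11 (EAGAIN, "Resource temporarily unavailable") on the upstream socket during bursts is the classic symptom of a full listen queue, and uWSGI caps it at 100 connections by default. A sketch of what raising it could look like:
# in iotlistener.ini, under [uwsgi]: raise the socket listen queue (default 100)
listen = 4096
# the kernel caps the backlog too, so raise net.core.somaxconn to match, e.g.:
# sudo sysctl -w net.core.somaxconn=4096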
I have a Flask back end that is functional without using uwsgi and nginx.
I'm trying to deploy it on an EC2 instance with its front-end.
No matter what I do, I can't reach the back-end. I opened all the ports for testing purposes but that does not help.
Here's my uwsgi ini file:
[uwsgi]
module = main
callable = app
master = true
processes = 1
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
Then I use this command to start the app:
uwsgi --ini uwsgi.ini
The message returned is
WSGI app 0 (mountpoint='') ready in 9 seconds.
spawned uWSGI worker 1 (and the only) PID: xxxx
Then here is my Nginx conf file:
server {
server_name my_name.com www.ny_name.com;
location / {
root /home/ubuntu/front_end/dist/;
}
location /gan {
proxy_pass https:localhost:5000/gan;
}
## below https conf by certbot
}
If my understanding is correct, whenever a request reaches "my_name.com/gan..." it will be redirected to localhost on port 5000, where the back-end is started by uwsgi.
But I can't reach it. I'm simply trying to do a GET request on "my_name.com/gan" in my browser (it should return a random image), but I get a 502 from nginx.
Important to note: the front-end works fine and I can access it in the browser.
My guess is that the URL is not in the proper form.
Try:
proxy_pass http://0.0.0.0:5000;
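As a sketch of what that change looks like in place (the /gan prefix and port 5000 come from the question; note that the ini above uses socket =, which speaks the uwsgi binary protocol rather than plain HTTP, so the uwsgi_pass variant is shown as an alternative assumption):
location /gan {
    # if uWSGI serves plain HTTP (e.g. http = 0.0.0.0:5000 in the ini):
    proxy_pass http://127.0.0.1:5000;
    # or, keeping socket = 0.0.0.0:5000 (uwsgi protocol), something like:
    # include uwsgi_params;
    # uwsgi_pass 127.0.0.1:5000;
}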
I'm involved in optimizing a Flask web application deployed with uWSGI behind an NGINX reverse proxy. The response time in the uWSGI logs seems fast for my application (6 ms on average), but I get HTTP 499 errors in about 10 out of 1,000 requests and HTTP 502 errors in about 4 out of 10,000 requests (the application handles around two hundred requests per second).
According to this thread it seems typical to have 499 errors because the client closed the connection (Question 1: is this really typical in real applications?). What about 502 errors? Are they typical in real applications? (Question 2)
My guess was that requests were queueing in NGINX while waiting for the Python response and exceeding NGINX's time limit. But that doesn't seem to be true, because every time I test the server with the Google Chrome network analyzer I get the response (200) quickly over ordinary network bandwidth (60 kbps). I also ran scripted load tests.
uWSGI Config:
[uwsgi]
http = 0.0.0.0:7000
wsgi-file = app.py
callable = app
processes = 12
lazy = true
lazy-apps = true
logto = /root/logs/data-gathering.log
NGINX Config(selected parts):
worker_connections 10000;
keepalive_timeout 65;
worker_rlimit_nofile 30000;
worker_processes auto;
The server load average is around 1.4 (with 4 CPU cores) and 70% of memory is still free.
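One way to check whether requests really do queue in front of the application is to log upstream timings in NGINX; a minimal sketch (the format name and log path are placeholders):
# in the http block
log_format upstream_time '$remote_addr "$request" $status rt=$request_time urt=$upstream_response_time';
# in the server block
access_log /var/log/nginx/upstream_time.log upstream_time;
If urt stays close to the 6 ms seen in the uWSGI logs while rt grows, the extra time is being spent queueing or on the client side rather than in the application, which would fit the 499s.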
I'm trying to migrate a Django app from Ubuntu 14.04 to a Raspberry Pi (Raspbian OS).
For Ubuntu I followed http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html and it worked.
On Raspbian it's not so simple.
This is my bills_nginx.conf in /etc/nginx/sites-enabled:
# the upstream component nginx needs to connect to
upstream django {
server unix:/var/www/html/bills/bills/bills.sock; # for a file socket
#server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
# the port your site will be served on
listen 80;
# the domain name it will serve for
server_name 192.168.5.5; # substitute your machine's IP address or FQDN
charset utf-8;
# max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media {
alias /var/www/html/bills/bills/bills/media; # your Django project's media files - amend as required
}
location /static {
alias /var/www/html/bills/bills/static; # your Django project's static files - amend as required
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass django;
include /var/www/html/bills/bills/uwsgi_params; # the uwsgi_params file you installed
}
}
And this is my uWSGI ini file:
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /var/www/html/bills/bills
# Django's wsgi file
module = bills.wsgi
# the virtualenv (full path)
home = /home/seb/.virtualenvs/bills3
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /var/www/html/bills/bills/bills.sock
# ... with appropriate permissions - may be needed
uid = www-data
gid = www-data
chown-socket = www-data:www-data
chmod-socket = 666
# clear environment on exit
vacuum = true
daemonize=/var/log/uwsgi/bills3.log
error_log=/var/log/nginx/bills3_error.log
In error.log I get:
2017/03/08 10:27:43 [error] 654#0: *1 connect() to unix:/var/www/html/bills/bills/bills.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.5.2, server: 192.168.5.5, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:/var/www/html/bills/bills/bills.sock:", host: "192.168.5.5:8000"
please help me get it working :)
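Since error 111 (Connection refused) on a unix socket usually means nothing is listening on it, a first check (paths taken from the config above) is whether the socket file exists at all and what the uWSGI log says:
ls -l /var/www/html/bills/bills/bills.sock
tail -n 50 /var/log/uwsgi/bills3.log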
chmod-socket, chown-socket, gid, uid, socket: for uWSGI and nginx to communicate over a socket, you need to specify the permissions and the owner of the socket. 777 as chmod-socket is much too liberal for production, but you may have to experiment with this number so that everything that needs to can communicate. If you don't take care of your socket configuration, you will get errors like the one above.
So make sure the permissions on the folder that holds the socket are right as well.
I think a better way is:
$ sudo mkdir /var/uwsgi
$ sudo chown www-data:www-data /var/uwsgi
And change the socket path
upstream django {
server unix:/var/uwsgi/bills.sock; # for a file socket
#server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
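If the socket moves to /var/uwsgi, the uWSGI side has to point at the same path; a matching sketch of the ini lines (based on the settings already shown above, with a tighter 660 permission):
socket = /var/uwsgi/bills.sock
chown-socket = www-data:www-data
chmod-socket = 660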
More reference: please check this great article:
http://monicalent.com/blog/2013/12/06/set-up-nginx-and-uwsgi/
I also had the same issue before; maybe you can check my configuration in this question too:
nginx django uwsgi page not found error
I am trying to get my Python application to run on port 80 so I can host my page on the Internet and see my temperature and all that remotely.
I get a 502 Bad Gateway error and I can't seem to figure out why. It seems it's having trouble writing my .sock file to the temp directory.
I am following this tutorial.
https://iotbytes.wordpress.com/python-flask-web-application-on-raspberry-pi-with-nginx-and-uwsgi/
/home/pi/sampleApp/sampleApp.py
from flask import Flask

first_app = Flask(__name__)

@first_app.route("/")
def first_function():
    return "<html><body><h1 style='color:red'>I am hosted on Raspberry Pi !!!</h1></body></html>"

if __name__ == "__main__":
    first_app.run(host='0.0.0.0')
/home/pi/sampleApp/uwsgi_config.ini
[uwsgi]
chdir = /home/pi/sampleApp
module = sample_app:first_app
master = true
processes = 1
threads = 2
uid = www-data
gid = www-data
socket = /tmp/sample_app.sock
chmod-socket = 664
vacuum = true
die-on-term = true
In /etc/rc.local, just before exit 0:
/usr/local/bin/uwsgi --ini /home/pi/sampleApp/uwsgi_config.ini --uid www-data --gid www-data --daemonize /var/log/uwsgi.log
/etc/nginx/sites-available/sample_app_proxy (I verified this appeared in sites-enabled after I linked it):
server {
listen 80;
server_name localhost;
location / { try_files $uri @app; }
location @app {
include uwsgi_params;
uwsgi_pass unix:/tmp/sample_app.sock;
}
}
I got all the way to the final step with 100 percent success. After I linked the sample_app_proxy file so it shows up in /etc/nginx/sites-enabled/, I did a service nginx restart. When I open 'localhost' in my browser I get a 502 Bad Gateway.
I noticed an error at the bottom of the nginx logs:
2017/01/29 14:49:08 [crit] 1883#0: *8 connect() to unix:///tmp/sample_app.sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///tmp/sample_app.sock:", host: "localhost", referrer: "http://localhost/"
My source code is exactly as you see it in the tutorial; I checked it over many times.
I looked at /etc/logs/uwsgi.log and found this message at the bottom:
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7336
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
The -s/--socket option is missing and stdin is not a socket.
I am not sure what is going on or why it doesn't seem to write the .sock file to the /tmp/ directory. The test I did earlier in the tutorial worked fine and the sample_app.sock file showed up in /tmp/, but when I run the application this way it doesn't seem to work.
I did a lot of searching and saw many posts saying to use "///" instead of "/" in the /etc/nginx/sites-available/sample_app_proxy file, but whether I use one or three, I still get the 502 error.
uwsgi_pass unix:///tmp/sample_app.sock;
Any help would be greatly appreciated as this is the last step I need to accomplish so I can do remote stuff to my home. Thanks!
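That last log line suggests uWSGI came up without ever applying the socket option from the ini (otherwise it would not complain that -s/--socket is missing), so one hedged diagnostic is to run the same command by hand in the foreground and watch whether the socket appears:
sudo /usr/local/bin/uwsgi --ini /home/pi/sampleApp/uwsgi_config.ini
# then, in another terminal:
ls -l /tmp/sample_app.sock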
I have an app on Pyramid. I run it in uWSGI with this config:
[uwsgi]
socket = mysite:8055
master = true
processes = 4
vacuum = true
lazy-apps = true
gevent = 100
And nginx config:
server {
listen 8050;
include uwsgi_params;
location / {
uwsgi_pass mysite:8055;
}
}
Usually everything is fine, but sometimes uWSGI kills workers, and I have no idea why.
I see in uWSGI logs:
DAMN ! worker 2 (pid: 4247) died, killed by signal 9 :( trying respawn ...
Respawned uWSGI worker 2 (new pid: 4457)
but there are no Python exceptions in the logs.
Sometimes I see in the uWSGI logs:
invalid request block size: 11484 (max 4096)...skip
[uwsgi-http key: my site:8050 client_addr: 127.0.0.1 client_port: 63367] hr_instance_read(): Connection reset by peer [plugins/http/http.c line 614]
And in the nginx error.log:
*13388 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1,
*13955 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:
I think this can be solved by adding buffer-size=32768, but it is unlikely that this is why uWSGI kills workers.
Why would uWSGI kill workers? And how can I find out the reason?
The line "DAMN ! worker 2 (pid: 4247) died, ..." tells me nothing.
Signal 9 means it received a SIGKILL, so something sent a kill to your worker. It's relatively likely that the out-of-memory killer decided to kill your app because it was using too much memory. Try watching the workers with a process monitor and see if they use a lot of memory.
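If the OOM killer is the suspect, the kernel log normally records the kill; a quick check might be (assuming a Linux host where dmesg is readable):
dmesg | grep -i -E "killed process|out of memory"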
Try adding the harakiri-verbose = true option to the uWSGI config.
I had the same problem. For me, changing the value of the reload-on-rss setting in uwsgi.ini from 2048 to 4048, and harakiri to 600, solved the problem.
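Put together, those settings would sit in the [uwsgi] section roughly like this (values taken from the answers above; harakiri is in seconds and reload-on-rss in megabytes, so tune both to your workload):
[uwsgi]
# log extra detail when a worker is killed for exceeding the harakiri timeout
harakiri-verbose = true
# kill and respawn any worker that spends more than 600 seconds on one request
harakiri = 600
# recycle a worker once its resident memory grows past ~4 GB
reload-on-rss = 4048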
For me it was that I hadn't set app.config["SERVER_NAME"] = "x".