How to find out why uWSGI kills workers? - python

I have an app on Pyramid. I run it in uWSGI with this config:
[uwsgi]
socket = mysite:8055
master = true
processes = 4
vacuum = true
lazy-apps = true
gevent = 100
And nginx config:
server {
    listen 8050;
    include uwsgi_params;
    location / {
        uwsgi_pass mysite:8055;
    }
}
Usually everything is fine, but sometimes uWSGI kills workers, and I have no idea why.
I see this in the uWSGI logs:
DAMN ! worker 2 (pid: 4247) died, killed by signal 9 :( trying respawn ...
Respawned uWSGI worker 2 (new pid: 4457)
but there are no Python exceptions in the logs.
Sometimes I also see in the uWSGI logs:
invalid request block size: 11484 (max 4096)...skip
[uwsgi-http key: mysite:8050 client_addr: 127.0.0.1 client_port: 63367] hr_instance_read(): Connection reset by peer [plugins/http/http.c line 614]
And nginx errors.log:
*13388 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1,
*13955 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:
I think this can be solved by adding buffer-size = 32768, but it is unlikely that this is why uWSGI kills workers.
Why does uWSGI kill workers, and how can I find out the reason?
The line "DAMN ! worker 2 (pid: 4247) died, ..." tells me nothing.

Signal 9 means the process received a SIGKILL, so something sent a kill to your worker. It is quite likely that the out-of-memory (OOM) killer decided to kill your app because it was using too much memory. Try watching the workers with a process monitor and see whether they use a lot of memory.
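If the OOM killer is the culprit, dmesg on Linux will usually show a "Killed process" line around the time of the respawn. For watching worker memory from a script, here is a minimal sketch using the third-party psutil library; the PIDs are just the ones from the log above and would need to be replaced with your own workers' PIDs:
import psutil  # third-party: pip install psutil

# PIDs taken from the uWSGI log above; substitute your own workers' PIDs.
for pid in (4247, 4457):
    try:
        proc = psutil.Process(pid)
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        print(f"pid {pid} ({proc.name()}): rss = {rss_mib:.1f} MiB")
    except psutil.NoSuchProcess:
        print(f"pid {pid}: already gone")
If a worker's RSS climbs steadily between requests, a memory leak plus the OOM killer is the likely explanation for the SIGKILL.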

Try adding the harakiri-verbose = true option to the uWSGI config.

I had the same problem. For me, editing the uwsgi.ini file to raise reload-on-rss from 2048 to 4048 and set harakiri to 600 solved the problem.
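For reference, a minimal sketch of how the options from the two answers above fit into the [uwsgi] section; the values are the ones mentioned, not tuned recommendations:
[uwsgi]
# abort and respawn any worker that spends more than 600 seconds on one request
harakiri = 600
# log extra details about the request a worker was handling when harakiri fires
harakiri-verbose = true
# respawn a worker once its resident memory exceeds this many megabytes
reload-on-rss = 4048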

For me it was that I hadn't set app.config["SERVER_NAME"] = "x".

Related

HTTP server Nginx - Uwsgi - Flask throttling at high load

I am load testing my application. I have an EC2 server running Flask + uWSGI + Nginx (configured as per https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-20-04).
I tested with 4K records in 4 seconds, and I can see a lot of errors like the one below.
2022/04/17 15:16:37 [error] 19929#19929: *7769 connect() to unix:/home/ubuntu/wip/iotlistener/iotlistener.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: XX.XX.XX.XX, server: , request: "POST / HTTP/1.1", upstream: "uwsgi://unix:/home/ubuntu/wip/iotlistener/iotlistener.sock:
I can see the EC2 server is quite stable, and the CPU load does not go beyond 50%. The network usage is high, of course, but there are no red lines. The service itself is very light; it just dumps data into DynamoDB. I can see the DB metrics are quite stable.
So I feel this is due to some default configuration that restricts the load. Can you please help me identify it?
iotlistener.ini
[uwsgi]
module = wsgi:app
master = true
processes = 25
socket = iotlistener.sock
chmod-socket = 660
vacuum = true
die-on-term = true
The process count was 5; I changed it to 25, with no change in behaviour.
And this is the nginx configuration:
server {
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/ubuntu/wip/iotlistener/iotlistener.sock;
    }
}
I am expecting a production load well beyond 1K records per second. So please help me with this!
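Not from the original post, but one default worth checking for this symptom: connect() failing with (11: Resource temporarily unavailable) on a uWSGI unix socket is the classic sign that the listen queue (socket backlog) is full, and uWSGI's default backlog is only 100. A sketch of the relevant setting, with an illustrative value:
[uwsgi]
# raise the socket backlog from the default of 100;
# the kernel limit net.core.somaxconn must be at least this large
listen = 4096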

AWS Elastic Beanstalk sqsd error while processing a message

I have an Elastic Beanstalk Python worker environment. The average job running time is about 20 seconds. Sometimes the following scenario happens:
sqsd picks a message from the sqs queue and sends it to the worker.
The worker starts processing the message.
Within a few seconds (anywhere from 1 to 30), sqsd gets the following error and parks the message in the dead-letter queue, as I configured the retries to 1.
127.0.0.1 (-) - - [23/Nov/2017:19:48:17 +0000] "POST / HTTP/1.1" 500 527 "-" "aws-sqsd/2.3"
The worker continues to process the message and finishes successfully. I have logs to trace that.
That makes the environment unhealthy in general.
I have the connection timeout = 60 seconds, Inactivity timeout = 600, Visibility timeout = 600, HTTP connections = 2.
I have the following in the configs as well
option_settings:
  aws:elasticbeanstalk:container:python:
    NumProcesses: 3
    NumThreads: 10
files:
  "/etc/httpd/conf.d/wsgi_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIApplicationGroup %{GLOBAL}
Is this because of some memory limit that WSGI puts on every request? That is the only thing I can think of.

502 Bad Gateway using Python, Nginx, and flask on a Raspberry Pi

I am trying to get my Python application to run on port 80 so I can host my page to the Internet and see my temperature and all that remotely.
I get a 502 Bad Gateway error and I can't seem to figure out why. It seems uWSGI is having trouble writing my .sock file to the temp directory.
I am following this tutorial.
https://iotbytes.wordpress.com/python-flask-web-application-on-raspberry-pi-with-nginx-and-uwsgi/
/home/pi/sampleApp/sampleApp.py
from flask import Flask

first_app = Flask(__name__)

@first_app.route("/")
def first_function():
    return "<html><body><h1 style='color:red'>I am hosted on Raspberry Pi !!!</h1></body></html>"

if __name__ == "__main__":
    first_app.run(host='0.0.0.0')
/home/pi/sampleApp/uwsgi_config.ini
[uwsgi]
chdir = /home/pi/sampleApp
module = sample_app:first_app
master = true
processes = 1
threads = 2
uid = www-data
gid = www-data
socket = /tmp/sample_app.sock
chmod-socket = 664
vacuum = true
die-on-term = true
/etc/rc.local just before exit 0
/usr/local/bin/uwsgi --ini /home/pi/sampleApp/uwsgi_config.ini --uid www-data --gid www-data --daemonize /var/log/uwsgi.log
/etc/nginx/sites-available/sample_app_proxy, and I verified it appeared in sites-enabled after I linked it:
server {
    listen 80;
    server_name localhost;
    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/sample_app.sock;
    }
}
I got all the way to the final step with 100 percent success. After I linked the sample_app_proxy file so that it appears in /etc/nginx/sites-enabled/, I did a service nginx restart. When I open 'localhost' in my browser I get a 502 Bad Gateway.
I noticed in the nginx logs at the bottom that there was an error.
2017/01/29 14:49:08 [crit] 1883#0: *8 connect() to unix:///tmp/sample_app.sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///tmp/sample_app.sock:", host: "localhost", referrer: "http://localhost/"
My source code is exactly as you see it in the tutorial, I checked it over many times.
I looked at /var/log/uwsgi.log and found this message at the bottom.
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7336
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
The -s/--socket option is missing and stdin is not a socket.
I am not sure what is going on and why it doesn't seem to write the .sock file to the /tmp/ directory. The test I did earlier in the tutorial worked fine and the sample_app.sock file showed up in /tmp/, but when I run the application it doesn't seem to work.
I did a lot of searching and I saw many posts saying to use "///" instead of "/" in the /etc/nginx/sites-available/sample_app_proxy file, but whether I use one or three, I still get the 502 error.
uwsgi_pass unix:///tmp/sample_app.sock;
Any help would be greatly appreciated as this is the last step I need to accomplish so I can do remote stuff to my home. Thanks!
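A quick debugging step (not from the original post): run uWSGI in the foreground with the same ini file and read the startup banner. The warnings above ("running uWSGI without its master process manager", "The -s/--socket option is missing") suggest the [uwsgi] section of the ini file is not being applied at all, since that file does set both master and socket.
/usr/local/bin/uwsgi --ini /home/pi/sampleApp/uwsgi_config.ini
If this foreground run binds /tmp/sample_app.sock correctly, the problem is in how rc.local invokes the command rather than in the ini file itself.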

uWSGI rare gateway timeout

Good day everybody!
We have a simple web application (a REST service) written using good old web.py and hosted on Nginx + uWSGI. The issue is that once in a zillion requests it fails with "504 Gateway Timeout" on the exact same handler, which does nothing but a simple SQL SELECT query on an empty database (SQLAlchemy + PostgreSQL), meaning the query returns nothing. Here is the uWSGI config:
uwsgi:
  enable-threads: 1
  threads: 1
  workers: 2
  master: 1
  socket: :8001
  plugin: python
  chown-socket: nginx:nginx
  pythonpath: /usr/lib/python2.6/site-packages/nailgun
  touch-reload: /var/lib/nailgun-uwsgi
  virtualenv: /usr
  module: nailgun.wsgi
  buffer-size: 49180
  listen: 4096
  pidfile: /tmp/nailgun.pid
  logto: /var/log/nailgun/app.log
  mule: 1
  lazy: true
  shared-pyimport: /usr/lib/python2.6/site-packages/nailgun/utils/mule.py
And here is the line from the Python application log:
GET /api/nodes/ => generated 2 bytes in 10199124 msecs
uWSGI version is 2.0.3.
Can anyone please give a hint?
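Not from the original thread, but one observation: 10199124 msecs is almost three hours, so the worker was stuck rather than merely slow. If the goal is to have uWSGI itself cap such requests, a sketch in the same YAML style as the config above, with an illustrative value:
uwsgi:
  # kill and respawn a worker that spends more than 60 seconds on one request
  harakiri: 60
  # log details about the request that triggered the kill
  harakiri-verbose: 1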

uWSGI keepalive

Is it possible to somehow get around the keep-alive limitation of uWSGI? If not, what is the best way to implement persistent connections? I'm using nginx + uWSGI (Python), and I want clients to receive asynchronous updates from the server.
uWSGI supports keep-alive via the --http-keepalive option if you receive requests over HTTP.
/tmp$ cat app.py
def application(env, start_response):
    content = b"Hello World"
    start_response('200 OK', [
        ('Content-Type', 'text/html'),
        ('Content-Length', str(len(content))),
    ])
    return [content]
Run with:
/tmp$ uwsgi --http=:8000 --http-keepalive -w app &> /dev/null
And we can see connect calls via strace:
~$ strace -econnect wrk -d 10 -t 1 -c 1 http://127.0.0.1:8000
connect(3, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
Running 10s test @ http://127.0.0.1:8000
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    92.32us   56.14us    2.81ms   97.20%
    Req/Sec    11.10k   389.34     11.84k    68.32%
  111505 requests in 10.10s, 7.98MB read
Requests/sec:  11040.50
Transfer/sec:    808.63KB
+++ exited with 0 +++
See? Only one connection.
You are talking about two different things. If you want persistent connections from your clients to your app, you may want to use one of the async modes (via ugreen, gevent, ...). That way you will be able to maintain thousands of concurrent connections. Keep-alive is a way to route multiple requests over the same connection, which is pretty useless for your purpose. If instead you are referring to persistent connections between nginx and uWSGI, there is currently no way to get that behaviour in nginx. You may want to follow this ticket:
http://projects.unbit.it/uwsgi/ticket/66
It is about the fastrouter, but it will be applied to the http router too. It is still under heavy development.
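For the async-mode suggestion, a sketch of what enabling gevent could look like on the command line; this mirrors the gevent = 100 line in the question at the top of the page, and the module name (myapp:application) is a placeholder:
uwsgi --socket :8001 --master --gevent 100 --module myapp:application
The number after --gevent is the count of async cores, i.e. how many concurrent connections each worker can hold open.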
