Good day everybody!
We have a simple web application (a REST service) written using good old web.py and hosted on Nginx + uWSGI. The issue is that once in a zillion requests it fails with "504 Gateway Timeout" on the exact same handler, which does nothing but a simple SQL SELECT query on an empty database (SQLAlchemy + PostgreSQL), meaning the query returns nothing. Here is the uWSGI config:
uwsgi:
enable-threads: 1
threads: 1
workers: 2
master: 1
socket: :8001
plugin: python
chown-socket: nginx:nginx
pythonpath: /usr/lib/python2.6/site-packages/nailgun
touch-reload: /var/lib/nailgun-uwsgi
virtualenv: /usr
module: nailgun.wsgi
buffer-size: 49180
listen: 4096
pidfile: /tmp/nailgun.pid
logto: /var/log/nailgun/app.log
mule: 1
lazy: true
shared-pyimport: /usr/lib/python2.6/site-packages/nailgun/utils/mule.py
And here is the line from the Python application log:
GET /api/nodes/ => generated 2 bytes in 10199124 msecs
uWSGI version is 2.0.3.
Can anyone please give a hint?
I have a Flask back end that works fine without uwsgi and nginx.
I'm trying to deploy it on an EC2 instance together with its front end.
No matter what I do, I can't reach the back end. I opened all the ports for testing purposes, but that does not help.
Here's my uwsgi ini file:
[uwsgi]
module = main
callable = app
master = true
processes = 1
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
Then I use this command to start the app:
uwsgi --ini uwsgi.ini
The message returned is
WSGI app 0 (mountpoint='') ready in 9 seconds.
spawned uWSGI worker 1 (and the only) PID: xxxx
Then here is my Nginx conf file:
server {
    server_name my_name.com www.my_name.com;
    location / {
        root /home/ubuntu/front_end/dist/;
    }
    location /gan {
        proxy_pass https:localhost:5000/gan;
    }
    ## below https conf by certbot
}
If my understanding is correct, whenever a request reaches "my_name.com/gan..." it will be proxied to localhost on port 5000, where the back end is started by uwsgi.
But I can't reach it. I'm simply trying to do a GET request on "my_name.com/gan" in my browser (it should return a random image), but I get a 502 from nginx.
Important to note: the front end works fine and I can access it in the browser.
My guess is that the URL is not in the proper form.
Try
proxy_pass http://0.0.0.0:5000;
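For context, the corrected block might look roughly like the sketch below (addresses are illustrative; which pass directive applies depends on how the uWSGI socket was configured in uwsgi.ini):
location /gan {
    # if uwsgi.ini uses "http-socket = 0.0.0.0:5000", nginx can proxy plain HTTP to it:
    proxy_pass http://127.0.0.1:5000;

    # if it keeps "socket = 0.0.0.0:5000" (the binary uwsgi protocol), use uwsgi_pass instead:
    # include uwsgi_params;
    # uwsgi_pass 127.0.0.1:5000;
}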
The one-click Django setup with Ubuntu 16, Nginx, and Gunicorn is not routing my domain name. When I type the IP address into the address bar it works, but when I use the domain I get 502 Bad Gateway nginx/1.10.3 (Ubuntu). Looking at the nginx error log I see:
2017/10/16 19:05:18 [error] 23017#23017: *80 upstream prematurely closed connection while reading response header from upstream, client: redacted server: _, request: "GET / HTTP/1.1", upstream: "http://unix:/home/django/gunicorn.socket:/"
I followed the steps here:
https://www.digitalocean.com/community/tutorials/how-to-point-to-digitalocean-nameservers-from-common-domain-registrars#registrar-godaddy
and here:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean
But I must have done something wrong. Anyone have any ideas how to solve this? I am brand new to DO, Django, and web dev in general.
Updating NS servers can take up to 48 hours. If you already updated them more than 48 hours ago, clear your browser cache and then access your domain again; normally after 24 hours you can reach your domain in the browser. Secondly, check /etc/nginx/sites-available.
Your upstream in the nginx configs is wrong; see the nginx log: http://unix:/...? It should be either http://... or unix:/..., depending on how your Django setup is configured.
Check your nginx configs in /etc/nginx/sites-available/ or /etc/nginx/conf.d, fix the location of the upstream, and reload nginx to fix the issue.
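For illustration, a common pattern for proxying nginx to a Gunicorn Unix socket looks roughly like this (the socket path is copied from the error message above; everything else is a generic sketch, not necessarily the exact config shipped with the one-click image):
upstream app_server {
    # socket path as it appears in the nginx error log
    server unix:/home/django/gunicorn.socket fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app_server;
    }
}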
I have a Django application deployed on a VM using uWSGI & Nginx setup.
I would like to print some data by passing the required information, over a socket, to a printer that is configured on the same network:
import socket

printer_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
printer_socket.connect(('10.125.125.2', 9001))
printer_socket.send(bytes(source='data', encoding='utf-8'))
Note: the IP address and the port are here for illustration purposes.
Problem: I get an Err 111 Connection refused error. The error is triggered at the printer_socket.connect step.
Additional information: The python code that initializes the socket, connects to the required IP address/Port and sends the data works fine when it's run from the python interactive shell.
Question: What do I need to configure in order to allow opening sockets from a django web application, deployed using uWSGI and Nginx?
Please keep in mind that the configuration of the project is out of the scope of this question. I don't have trouble configuring the app; the app works fine. I am specifically interested in how to allow opening sockets from a web app served using a uWSGI + Nginx setup.
UPDATE
Here's the .ini configuration file for uWSGI.
[uwsgi]
project = App
uid = user
base = /home/%(uid)
chdir = %(base)/%(project)
home = %(base)/Venv/%(project)
module = %(project).wsgi:application
# daemonize = %{base}/uwsgi/%{project}.log
logto = /home/user/logs/uwsgi/%{project}.log
master = true
processes = 5
socket = /run/uwsgi/%(project).sock
chown-socket = %(uid):www-data
chmod-socket = 777
vacuum = true
buffer-size=32768
Thank you.
I am trying to get my Python application to run on port 80 so I can host my page on the Internet and check my temperature readings and all that remotely.
I get a 502 Bad Gateway error and I can't seem to figure out why. It seems it's having trouble writing my .sock file to a temp directory.
I am following this tutorial.
https://iotbytes.wordpress.com/python-flask-web-application-on-raspberry-pi-with-nginx-and-uwsgi/
/home/pi/sampleApp/sampleApp.py
from flask import Flask

first_app = Flask(__name__)

@first_app.route("/")
def first_function():
    return "<html><body><h1 style='color:red'>I am hosted on Raspberry Pi !!!</h1></body></html>"

if __name__ == "__main__":
    first_app.run(host='0.0.0.0')
/home/pi/sampleApp/uwsgi_config.ini
[uwsgi]
chdir = /home/pi/sampleApp
module = sample_app:first_app
master = true
processes = 1
threads = 2
uid = www-data
gid = www-data
socket = /tmp/sample_app.sock
chmod-socket = 664
vacuum = true
die-on-term = true
/etc/rc.local just before exit 0
/usr/local/bin/uwsgi --ini /home/pi/sampleApp/uwsgi_config.ini --uid www-data --gid www-data --daemonize /var/log/uwsgi.log
/etc/nginx/sites-available/sample_app_proxy and I verified this moved to sites-enabled after I linked it.
server {
    listen 80;
    server_name localhost;
    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/sample_app.sock;
    }
}
I got all the way to the final step with 100 percent success. After I linked the sample_app_proxy file so it shows up in /etc/nginx/sites-enabled/, I did a service nginx restart. When I open 'localhost' in my browser I get a 502 Bad Gateway.
I noticed in the nginx logs at the bottom that there was an error.
2017/01/29 14:49:08 [crit] 1883#0: *8 connect() to unix:///tmp/sample_app.sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///tmp/sample_app.sock:", host: "localhost", referrer: "http://localhost/"
My source code is exactly as you see it in the tutorial; I checked it over many times.
I looked at /etc/logs/uwsgi.log and found this message at the bottom.
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7336
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
The -s/--socket option is missing and stdin is not a socket.
I am not sure what is going on and why it doesn't seem to write the .sock file to the /tmp/ directory. The test I did earlier in the tutorial worked fine and the sample_app.sock file showed up in /tmp/. But when I run the application it doesn't seem to work.
I did a lot of searching and I saw many posts saying to use "///" instead of "/" in the /etc/nginx/sites-available/sample_app_proxy file, but whether I use one or three, I still get the 502 error.
uwsgi_pass unix:///tmp/sample_app.sock;
Any help would be greatly appreciated as this is the last step I need to accomplish so I can do remote stuff to my home. Thanks!
Is it possible to somehow get around the keep-alive limitation of uWSGI? If not, what is the best way to implement persistent connections? I'm using Nginx + uWSGI (Python), and I want clients to receive asynchronous updates from the server.
uWSGI supports keep-alive via the --http-keepalive option if you receive requests over HTTP.
/tmp$ cat app.py
def application(env, start_response):
    content = b"Hello World"
    start_response('200 OK', [
        ('Content-Type', 'text/html'),
        ('Content-Length', str(len(content))),
    ])
    return [content]
Run with:
/tmp$ uwsgi --http=:8000 --http-keepalive -w app &> /dev/null
And we can see connect calls via strace:
~$ strace -econnect wrk -d 10 -t 1 -c 1 http://127.0.0.1:8000
connect(3, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
Running 10s test @ http://127.0.0.1:8000
1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    92.32us   56.14us   2.81ms   97.20%
    Req/Sec    11.10k    389.34   11.84k    68.32%
  111505 requests in 10.10s, 7.98MB read
Requests/sec:  11040.50
Transfer/sec:    808.63KB
+++ exited with 0 +++
See? Only one connection.
You are talking about two different things. If you want persistent connections from your clients to your app, you may want to use async modes (via uGreen, gevent...). That way you will be able to maintain thousands of concurrent connections. Keep-alive is a way to route multiple requests over the same connection, but this is pretty useless for your purpose. Instead, if you are referring to persistent connections between nginx and uWSGI, there is no way (currently) in nginx to achieve such behaviour. You may want to follow this ticket:
http://projects.unbit.it/uwsgi/ticket/66
It is about the fastrouter, but it will be applied to the http router too. It is still under heavy development.
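To make the first suggestion concrete, a minimal sketch of running an app in gevent async mode could look like this (the module name and the number of async cores are illustrative, and it assumes uWSGI was built with, or can load, the gevent plugin):
uwsgi --http-socket :9090 --master \
      --gevent 100 \
      --module myapp
With this setup each worker can keep many client connections open concurrently (for example for long polling), instead of handling one request at a time per worker.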