Cannot reach Flask app served with uWSGI and Nginx - python

I have a Flask back-end that works fine when run without uWSGI and Nginx.
I'm trying to deploy it on an EC2 instance together with its front-end.
No matter what I do, I can't reach the back-end. I opened all the ports for testing purposes, but that does not help.
Here's my uwsgi ini file:
[uwsgi]
module = main
callable = app
master = true
processes = 1
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
Then I use this command to start the app:
uwsgi --ini uwsgi.ini
The message returned is
WSGI app 0 (mountpoint='') ready in 9 seconds.
spawned uWSGI worker 1 (and the only) PID: xxxx
Then here is my Nginx conf file:
server {
    server_name my_name.com www.my_name.com;

    location / {
        root /home/ubuntu/front_end/dist/;
    }

    location /gan {
        proxy_pass https:localhost:5000/gan;
    }

    ## below https conf by certbot
}
If my understanding is correct, whenever a request reaches "my_name.com/gan..." it should be proxied to localhost on port 5000, where the back-end is started by uWSGI.
But I can't reach it. I'm simply trying to do a GET request on "my_name.com/gan" in my browser (it should return a random image), but I get a 502 from nginx.
Important to note: the front-end works fine and I can access it in the browser.

My guess is that the URL is not in the proper form (https:localhost:5000 is missing the // after the scheme). Try:
proxy_pass http://0.0.0.0:5000;
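For completeness, a hedged sketch of how the two pieces could be wired up, assuming the uwsgi.ini from the question keeps socket = 0.0.0.0:5000 (with socket =, uWSGI speaks the uwsgi protocol rather than HTTP, so nginx would use uwsgi_pass instead of proxy_pass):
location /gan {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:5000;
}
Alternatively, proxy_pass http://127.0.0.1:5000; should work if the ini exposes an HTTP port instead, e.g. http = 0.0.0.0:5000.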

Related

504 Connection Error Flask Nginx uWSGI Ubuntu

Hello all, I was hoping I could receive some guidance on this matter.
I have a Flask application that is set up on an Ubuntu server. It uses SSH to create a tunnel to a CentOS 7 server that holds its MySQL database. When I run this application directly with Python on the Ubuntu server, I can log in to my application and view data from the database via the domain IP without any problem. Now, when I run the application under nginx and uWSGI, I can still get to the login page from my domain name, but upon entering my credentials and trying to log in, the page loads for around a minute and then I receive the 504 Connection Time Out error.
Could I be receiving this because my application is trying to reach out to another server while processing my request? I'm not sure, and nothing has helped yet. Here are my files:
server block
server {
    listen 80;
    server_name itinareport.tk www.itinareport.tk;

    location / {
        uwsgi_read_timeout 600;
        include uwsgi_params;
        uwsgi_pass unix:/home/pinchrep2/itinarep/itinarep.sock;
    }
}
ini file
[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = itinarep.sock
chmod-socket = 660
vacuum = true
die-on-term=true
wsgi.py
from main import app
if __name__ == "__main__":
    app.run()
service file
[Unit]
Description=uWSGI instance to serve itinarep
After=network.target
[Service]
User=pinchrep2
Group=www-data
WorkingDirectory=/home/pinchrep2/itinarep
Environment="PATH=/home/pinchrep2/itinarep/it_venv/bin"
ExecStart=/home/pinchrep2/itinarep/it_venv/bin/uwsgi --ini itinarep.ini
[Install]
WantedBy=multi-user.target
Here is where I set up the SSH tunnel in my main.py file:
main.py
from flask import Flask
from sshtunnel import SSHTunnelForwarder

sshforward = SSHTunnelForwarder(
    ("public ip", 22),
    ssh_username = 'user',
    ssh_password = 'pass',
    remote_bind_address = ('127.0.0.1', 3306)
)
sshforward.start()
local_port = sshforward.local_bind_port

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret_key'
app.config['SQLALCHEMY_DATABASE_URI'] = f"mysql+pymysql://root@localhost:{local_port}/asteriskcdrdb"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
Again, I just need this to be deployed. Please point me in the right direction configuration-wise. I can get to the application, but as soon as I log in I receive this issue.
When your database connection URL references "localhost", it's really connecting via a unix socket.
You can connect using a local_bind_address containing a unix socket by adding ?unix_socket=/path/to/mysql.sock to the SQLALCHEMY_DATABASE_URI, like this answer.
It seems that connecting to a remote unix socket is waiting for this upstream issue to be implemented.
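As a rough illustration of the URI form that answer describes (the socket path is only a guess, check where your MySQL server actually creates it):
app.config['SQLALCHEMY_DATABASE_URI'] = (
    "mysql+pymysql://root@localhost/asteriskcdrdb"
    "?unix_socket=/var/run/mysqld/mysqld.sock"  # assumed path, adjust to your server
)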

502 Bad Gateway using Python, Nginx, and Flask on a Raspberry Pi

I am trying to get my Python application to run on port 80 so I can host my page on the Internet and check my temperature readings and all that remotely.
I get a 502 Bad Gateway error and I can't seem to figure out why. It seems it's having trouble writing my .sock file to a temp directory.
I am following this tutorial:
https://iotbytes.wordpress.com/python-flask-web-application-on-raspberry-pi-with-nginx-and-uwsgi/
/home/pi/sampleApp/sampleApp.py
from flask import Flask

first_app = Flask(__name__)

@first_app.route("/")
def first_function():
    return "<html><body><h1 style='color:red'>I am hosted on Raspberry Pi !!!</h1></body></html>"

if __name__ == "__main__":
    first_app.run(host='0.0.0.0')
/home/pi/sampleApp/uwsgi_config.ini
[uwsgi]
chdir = /home/pi/sampleApp
module = sample_app:first_app
master = true
processes = 1
threads = 2
uid = www-data
gid = www-data
socket = /tmp/sample_app.sock
chmod-socket = 664
vacuum = true
die-on-term = true
/etc/rc.local just before exit 0
/usr/local/bin/uwsgi --ini /home/pi/sampleApp/uwsgi_config.ini --uid www- data --gid www-data --daemonize /var/log/uwsgi.log
/etc/nginx/sites-available/sample_app_proxy and I verified this moved to sites-enabled after I linked it.
server {
    listen 80;
    server_name localhost;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/sample_app.sock;
    }
}
I got all the way to the final step with 100 percent success. After I linked the sample_app_proxy file so it shows up in /etc/nginx/sites-enabled/, I did a service nginx restart. When I open 'localhost' in my browser I get a 502 Bad Gateway.
At the bottom of the nginx logs I noticed this error:
2017/01/29 14:49:08 [crit] 1883#0: *8 connect() to unix:///tmp/sample_app.sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///tmp/sample_app.sock:", host: "localhost", referrer: "http://localhost/"
My source code is exactly as you see it in the tutorial; I checked it over many times.
I looked at /etc/logs/uwsgi.log and found this message at the bottom:
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7336
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
The -s/--socket option is missing and stdin is not a socket.
I am not sure what is going on or why it doesn't seem to write the .sock file to the /tmp/ directory. The test I did earlier in the tutorial worked fine and the sample_app.sock file showed up in /tmp/. But when I run the application it doesn't seem to work.
I did a lot of searching and I saw many posts saying to use "///" instead of "/" in the /etc/nginx/sites-available/sample_app_proxy file, but whether I use one or three, I still get the 502 error.
uwsgi_pass unix:///tmp/sample_app.sock;
Any help would be greatly appreciated as this is the last step I need to accomplish so I can do remote stuff to my home. Thanks!

Run 2 uWSGI instances on the same server with Nginx

Is it possible to run two separate uWSGI processes on the same server, with Nginx serving up both sets of static files?
So far, this setup appears to work some of the time, but requests keep failing intermittently...
nginx.conf:
http {
    upstream deploy {
        server 127.0.0.1:8002;
    }
    server {
        # nginx config - deploy
    }
    upstream staging {
        server 127.0.0.1:8001;
    }
    server {
        # nginx config - staging
    }
}
Both uwsgi.ini files have master = true. Here's what they both look like:
uwsgi.ini
[uwsgi]
home = /home/bsdev/.virtualenvs/bs_py34/
env = DJANGO_SETTINGS_MODULE=myproject.settings.persistent
socket = 127.0.0.1:8003
chmod-socket = 666
uid = bsdev
gid = bsdev
master = true
enable-threads = true
processes = 4
chdir = /www/django/releases/persistent/bsrs/bsrs-django/myproject
module = myproject.wsgi:application
pidfile = /tmp/myproject-master-persistent.pid
harakiri = 10
max-requests = 5000
logdate = true
vacuum = true
daemonize = /var/log/uwsgi/myproject-persistent.log
logdate = true
Any ideas on how to get this to work?
Does anyone have a working configuration?
It seems that when both run as master, or when the same uWSGI process is serving both, requests are getting dropped...
Thanks in advance.
Stack:
Nginx
uwsgi
Django 1.8
To host two or more separate projects, I'd recommend the following:
Install a separate uWSGI for each project in its virtualenv
Create separate virtual servers in nginx/sites-available for each project, each pointing at its own uWSGI (see the sketch below)
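As a rough sketch of that layout (server names, ports, and paths below are made up for illustration), each project gets its own socket in its uwsgi.ini, and each nginx virtual server passes only to that socket:
server {
    listen 80;
    server_name staging.example.com;
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8001;   # staging project's uWSGI
    }
}
server {
    listen 80;
    server_name deploy.example.com;
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8002;   # deploy project's uWSGI
    }
}
Each uWSGI instance is then started from its own virtualenv with its own ini file (e.g. uwsgi --ini staging.ini and uwsgi --ini deploy.ini), so restarting one project never touches the other.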

Why are Django debugging errors suppressed by uWSGI?

I am running an nginx and uWSGI setup with Django, but errors are no longer shown in Django even though debugging is enabled: DEBUG = True. All errors that occur are saved in the uWSGI log file instead. How can I enable Django to show them again?
nginx.conf:
server {
    access_log /var/www/servers/myserver/development/logs/nginx_access.log;
    error_log /var/www/servers/myserver/development/logs/nginx_error.log warn;

    server_name localhost;
    listen [::]:80;

    charset utf-8;
    client_max_body_size 75M;

    location / {
        uwsgi_pass unix:/var/www/servers/myserver/development/sockets/myserver-dev.sock;
        include /var/www/servers/myserver/development/configs/uwsgi_params;
        deny all;
    }

    location /static {
        autoindex on;
        alias /var/www/servers/myserver/development/static;
    }

    location /media {
        autoindex on;
        alias /var/www/servers/myserver/development/media;
    }
}
uwsgi.conf:
[uwsgi]
;enable master process manager
master = true
;spawn 2 uWSGI worker processes
workers = 2
;unix socket (referenced in nginx configuration)
socket = /var/www/servers/myserver/development/sockets/myserver-dev.sock
# set mode of created UNIX socket
chmod-socket = 666
# place timestamps into log
log-date = true
# user identifier of uWSGI processes
uid = www-data
# group identifier of uWSGI processes
gid = www-data
; number of worker processes
;processes = 2
;vacuum = true
; project-level logging to the logs/ folder
;logto = /var/www/servers/myserver/development/logs/uwsgi.log
; django >= 1.4 project
chdir = /var/www/servers/myserver/development/webapp
wsgi-file = /var/www/servers/myserver/development/webapp/webapp/wsgi.py
;enable-threads = true
virtualenv = /var/www/servers/myserver/development/env
vacuum = true
env = DJANGO_SETTINGS_MODULE=webapp.settings
pidfile = /var/www/servers/myserver/development/logs/myserver-dev.pid
;harakiri = 20 # respawn processes taking more than 20 seconds
;max-requests = 5000 # respawn processes after serving 5000 requests
Try double-checking that DEBUG is actually True. I suspect it isn't. You could do this in one of your views with the following code.
## inside a view function
from django.conf import settings
raise Exception('Value of DEBUG is %s' % (settings.DEBUG,))
Restart uWSGI and try visiting that view. You should see if your supposition is correct immediately.
Have you configured log handlers in your application? I'm not really familiar with Django anymore, but in Flask, having debug=True under uWSGI will actually strip all your logging handlers. Instead you need to set up handlers and keep debug=False (a rough sketch of that is below).
I would have suspected that general error handling doesn't work under uWSGI in the same way as with runserver - http://librelist.com/browser/flask/2011/10/19/debug-when-deploy-in-uwsgi/#7be089baf631971dfb73a5a7b79e2248
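To illustrate what "set up handlers" could mean here, a minimal Flask-flavoured sketch (the log path is made up; a Django project would do the equivalent via the LOGGING setting):
import logging
from logging.handlers import RotatingFileHandler

# Assumed path for illustration; pick one your uWSGI user can write to.
handler = RotatingFileHandler('/var/log/webapp/app.log', maxBytes=1000000, backupCount=3)
handler.setLevel(logging.ERROR)
app.logger.addHandler(handler)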

Flask with gevent multicore

What is the cleanest way to run a Flask application with a gevent backend server and utilize all processor cores? My idea is to run multiple copies of the Flask application, where each gevent WSGIServer listens on one port in the range 5000..5003 (for 4 processes), with nginx as a load balancer.
But I'm not sure this way is the best, and maybe there are other ways to do it. For example, a master process listens on one port and workers process the incoming connections.
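For reference, the per-port idea from the question could look roughly like this (assuming the Flask app is importable as app from myapp.py; the file and names are illustrative). You would start one copy per core, on ports 5000 through 5003, and point an nginx upstream at them:
import sys
from gevent.pywsgi import WSGIServer
from myapp import app

# The port comes from the command line so the same script can be started
# once per core, e.g. "python serve.py 5000" ... "python serve.py 5003".
port = int(sys.argv[1]) if len(sys.argv) > 1 else 5000
WSGIServer(('127.0.0.1', port), app).serve_forever()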
I'll take a shot!
Nginx!
server section:
location / {
    include proxy_params;
    proxy_pass http://127.0.0.1:5000;
}
Flask App
This is a simple Flask app that I will be using for this example.
myapp.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
uWSGI
Okay so I know that you said that you wanted to use gevent, but if you are willing to compromise on that a little bit I think you would be very happy with this setup.
[uwsgi]
master = true
plugin = python
http-socket = 127.0.0.1:5000
workers = 4
wsgi-file = myapp.py
callable = app
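Assuming that config is saved as something like myapp.ini (the name is arbitrary), it could then be started with uwsgi --ini myapp.ini, with the proxy_pass above pointing at 127.0.0.1:5000.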
Gunicorn
If you must have gevent you might like this little setup
config.py:
import multiprocessing
workers = multiprocessing.cpu_count()
bind = "127.0.0.1:5000"
worker_class = 'gevent'
worker_connections = 30
Then you can run:
gunicorn -c config.py myapp:app
That's right: you get a worker for each CPU and 30 connections per worker.
See if that works for you.
If you are really sold on using nginx as a load balancer, try something like this in your http section:
upstream backend {
    server 127.0.0.1:5000;
    server 127.0.0.1:5002;
    server 127.0.0.1:5003;
    server 127.0.0.1:5004;
}
then this in the server section:
location / {
    include proxy_params;
    proxy_pass http://backend;
}
Good Luck buddy!
