Solved
See the updates at the bottom for the fixes.
I'm trying to connect my Django app to nginx via uWSGI, but it seems that no data is ever passed to uWSGI. I've verified that the uWSGI server itself runs properly, yet I don't get any log output on either end.
uwsgi.ini
[uwsgi]
module = MyDjangoApp.wsgi:application
master = True
;http-socket = :8001 # to run uwsgi on its own to ensure that it works
socket = :8001
vacuum = True
max-requests = 5000
plugin = python3
enable-threads = True
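For reference, the module = MyDjangoApp.wsgi:application line expects the standard wsgi.py that django-admin startproject generates; a minimal sketch, with the project name assumed to match MyDjangoApp, looks like this:
# MyDjangoApp/wsgi.py - standard Django WSGI entry point (sketch; project name assumed)
import os

from django.core.wsgi import get_wsgi_application

# point Django at the project's settings module before building the application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MyDjangoApp.settings")

# the "application" callable that module = MyDjangoApp.wsgi:application refers to
application = get_wsgi_application()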
/etc/nginx/sites-available file tree
default
serverDjango_nginx.conf
serverDjango_nginx.conf:
# the upstream component nginx needs to connect to
upstream django {
    #server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8000;
    # the domain name it will serve for
    server_name 127.0.0.1; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    # location /media {
    #     alias /path/to/your/mysite/media; # your Django project's media files - amend as required
    # }
    # location /static {
    #     alias /path/to/your/mysite/static; # your Django project's static files - amend as required
    # }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/pi/Server/uwsgi_params; # the uwsgi_params file you installed
    }
}
UPDATE:
First, the site wasn't enabled...
Second, I put a symlink to it in /etc/nginx/sites-enabled/, as the documentation says.
Now I get this weird error:
2020/03/29 12:14:18 [emerg] 4344#4344: open() "/etc/nginx/sites-enabled/serverDjango_nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:63
I looked into the corresponding config file and found
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
and now I am wondering why it does not find the file I've linked to:
sudo ln -s ~/etc/nginx/sites-available/serverDjango_nginx.conf /etc/nginx/sites-enabled/
Update No. 2
So the link path was wrong because of:
sudo ln -s ~/etc/nginx/sites-available/serverDjango_nginx.conf
the tilde there, which made the path resolve under my home directory instead of the filesystem root, thereby invalidating the link.
This site is a great tool for generating your Nginx config files. In your server block you should set listen to either 80 or 443 (if you want the site to be accessible via the standard HTTP/HTTPS ports). You should also set server_name to your domain, such as www.google.com google.com (yes, include both), or whatever domain(s) you want to serve your Django site on.
I don't use uwsgi_pass under location like you do, either. I just use proxy_pass, as in proxy_pass http://localhost:8001;, and then include my proxy config.
Related
I'm new to Python and have been put on the task of building out a spreadsheet parser. I've created a Python script that reads an .xlsx file and parses the data. I have an Nginx server set up that this will be hosted on. I need this script to be an API endpoint so I can pass the parsed data back as JSON. I have been reading about WSGI for production servers and have tried to follow the route of building that out. I am able to serve a path on the server and have it return the output of the WSGI Python script. The script has the following:
def application(environ, start_response):
    status = '200 OK'
    html = '<html>\n' \
           '<body>\n' \
           '    Hooray, mod_wsgi is working\n' \
           '</body>\n' \
           '</html>\n'
    response_header = [('Content-type', 'text/html')]
    start_response(status, response_header)
    return [html.encode('utf-8')]  # WSGI response bodies must be bytes under Python 3
I'm a little confused as to how to receive a request and send back JSON with my Excel parser class. Thanks, and I hope I'm being clear. I do have a Flask server that works, but I do not know how to keep it running constantly to serve my endpoint:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/parser/direct_energy', methods=['GET'])
def get_data():
    return jsonify(commissions_data)

if __name__ == '__main__':
    app.run(host='0.0.0.0')
You don't want to use raw WSGI for this.
Use a package such as FastAPI (or Flask) to make everything easier for you.
For instance, using FastAPI, an app with an endpoint that receives a binary (Excel) file and returns a JSON response is approximately:
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/process")
def process_file(file: UploadFile = File(...)):
    data = file.file.read()  # raw bytes of the uploaded spreadsheet
    response = my_data_processing_function(data)
    return {"response": response}
See:
To get going: https://fastapi.tiangolo.com/tutorial/
To process files: https://fastapi.tiangolo.com/tutorial/request-files/
To deploy your service (behind Nginx): https://fastapi.tiangolo.com/deployment/
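The my_data_processing_function used above is only a placeholder from the snippet; as a rough sketch of what it might look like for the spreadsheet use case (assuming the openpyxl package and a simple sheet with a header row, both assumptions on my part):
# hypothetical parser sketch - openpyxl and the flat header-row layout are assumptions
import io

from openpyxl import load_workbook

def my_data_processing_function(data: bytes):
    # load the uploaded .xlsx bytes without writing them to disk
    workbook = load_workbook(io.BytesIO(data), read_only=True)
    sheet = workbook.active

    rows = sheet.iter_rows(values_only=True)
    header = next(rows)  # first row is assumed to hold the column names
    # turn every remaining row into a JSON-friendly dict
    return [dict(zip(header, row)) for row in rows]
FastAPI will serialize the returned list of dicts to JSON automatically.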
I use python/flask for development & gunicorn for production.
To get it to accept HTTP requests, I use function decorators. It's the most common way.
@application.route('/epp/api/v1.0/request', methods=['POST'])
def eppJSON():
    if flask.request.json is None:
        return abort(400, "No JSON data was POSTed")

    return jsonRequest(flask.request.json, flask.request.remote_addr)
So here, the URL /epp/api/v1.0/request accepts POSTed JSON and returns JSON.
When you run Flask in dev mode it listens on http://127.0.0.1:5000
https://github.com/james-stevens/epp-restapi/blob/master/epprest.py
https://github.com/james-stevens/dnsflsk/blob/master/dnsflsk.py
These are both Python/Flask projects of mine. Feel free to copy them. They each run multiple instances of the Python code in a single container, load-balanced by nginx, which is a pretty neat combination.
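Adapted to the spreadsheet question, the same decorator pattern could look roughly like the sketch below; the /parser/direct_energy route and the parse_xlsx helper are my assumptions, not code from the linked projects.
# hypothetical Flask endpoint sketch - parse_xlsx() stands in for the asker's parser class
import flask

application = flask.Flask(__name__)

@application.route('/parser/direct_energy', methods=['POST'])
def parse_spreadsheet():
    if 'file' not in flask.request.files:
        flask.abort(400, "No spreadsheet was POSTed")
    # hand the uploaded workbook bytes to the (assumed) parser and return its result as JSON
    commissions_data = parse_xlsx(flask.request.files['file'].read())
    return flask.jsonify(commissions_data)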
UPDATE
I got things working through Nginx, Flask, and Gunicorn. However, my Flask app only works when I go to '/'. If I go to a route such as /parser/de/v1, I get a 404 Not Found.
Here is my setup for Nginx:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html/excel_parser;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name 208.97.141.147;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://127.0.0.1:5000;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
My nginx.conf looks slightly different, partly because I am running multiple WSGI instances and having nginx load-balance over them:
worker_processes 3;

events {
    worker_connections 1024;
}

user daemon;

http {
    access_log off;
    error_log stderr error;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream dns_servers {
        server unix:/ram/dnsflsk_1.sock;
        server unix:/ram/dnsflsk_2.sock;
        server unix:/ram/dnsflsk_3.sock;
    }

    server {
        listen 800 ssl;
        server_name localhost;

        ssl_certificate certkey.pem;
        ssl_certificate_key certkey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://dns_servers;
        }
    }
}
But with this, all the URLs are passed to the Python/WSGI backend.
I would like to take some user input, run a few lines of Python, and display the results on the web.
I have a domain pointed to a server on DigitalOcean, and am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04
I've been able to get through the tutorial and it does work, however I'd of course like for my site not to be completely overridden by the phrase "Hello there!". I would like to display the results at a non-root location such as https://example.com/myproject/.
The domain I have has already been secured using Let's Encrypt & CertBot.
I am using a single nginx config file called default - the rest of the tutorial I followed exactly. The problem seems to be in the proxy_pass directive. When I move it to the / location block, it works and my index page is overridden with "Hello there!". When I move proxy_params and proxy_pass to a /myproject/ location block, I get a 404 error. I've tried a handful of things and tried to understand location blocks better, but to no avail.
Here is the Nginx config file:
# Default server configuration
#
server {
    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.php;

    server_name example.com www.example.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /myproject/ {
        include proxy_params;
        proxy_pass http://unix:/home/jackson/myproject/myproject.sock;
    }

    # pass the PHP scripts to FastCGI server
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    location ~ /\.ht {
        deny all;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Any help would be greatly appreciated. Thank you!
I needed to change the @app.route decorator in the .py file to the correct route, and, I think crucially, specify the GET and POST methods.
from flask import Flask

app = Flask(__name__)

@app.route("/myproject/", methods=['GET', 'POST'])
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
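One way to keep the app running constantly behind that /myproject/ location is Gunicorn bound to the same unix socket the proxy_pass line points at. A minimal gunicorn.conf.py sketch follows; the socket path is taken from the nginx config above, while the worker count and umask are assumptions of mine rather than values from the tutorial.
# gunicorn.conf.py - a sketch; bind matches the nginx proxy_pass socket above
bind = "unix:/home/jackson/myproject/myproject.sock"
workers = 3     # arbitrary choice; tune to the machine
umask = 0o007   # keep the socket group-accessible so nginx can connect
Started with something like gunicorn -c gunicorn.conf.py wsgi:app (or via the systemd unit the tutorial sets up), this keeps the app served on the socket nginx expects.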
I'm trying to migrate a Django app from Ubuntu 14.04 to a Raspberry Pi (Raspbian OS).
On Ubuntu I followed http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html and it worked.
On Raspbian it's not so simple.
This is my bills_nginx.conf in /etc/nginx/sites-enabled:
bills_nginx.conf
# the upstream component nginx needs to connect to
upstream django {
    server unix:/var/www/html/bills/bills/bills.sock; # for a file socket
    #server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name 192.168.5.5; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /var/www/html/bills/bills/bills/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /var/www/html/bills/bills/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /var/www/html/bills/bills/uwsgi_params; # the uwsgi_params file you installed
    }
}
And this is my uWSGI INI file:
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /var/www/html/bills/bills
# Django's wsgi file
module = bills.wsgi
# the virtualenv (full path)
home = /home/seb/.virtualenvs/bills3

# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /var/www/html/bills/bills/bills.sock
# ... with appropriate permissions - may be needed
uid = www-data
gid = www-data
chown-socket = www-data:www-data
chmod-socket = 666
# clear environment on exit
vacuum = true
daemonize = /var/log/uwsgi/bills3.log
error_log = /var/log/nginx/bills3_error.log
In error.log I get:
2017/03/08 10:27:43 [error] 654#0: *1 connect() to unix:/var/www/html/bills/bills/bills.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.5.2, server: 192.168.5.5, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:/var/www/html/bills/bills/bills.sock:", host: "192.168.5.5:8000"
Please help me get it working :)
chmod-socket, chown-socket, gid, uid, socket: for uWSGI and nginx to communicate over a socket, you need to specify the permissions and the owner of the socket. 777 as chmod-socket is much too liberal for production; however, you may have to experiment with this value until everything that needs to talk to the socket can do so. If you don't take care of your socket configuration, you will get errors like the one above.
So make sure the permissions of the containing folder are correct too.
I think a better way is:
$ sudo mkdir /var/uwsgi
$ sudo chown www-data:www-data /var/uwsgi
And change the socket path (in both the nginx upstream and the socket = line of your uWSGI INI):
upstream django {
    server unix:/var/uwsgi/bills.sock; # for a file socket
    #server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
More references: please check this great article:
http://monicalent.com/blog/2013/12/06/set-up-nginx-and-uwsgi/
I also had the same issue before; maybe you can check my configuration too:
nginx django uwsgi page not found error
I am running a Python Flask application with Gunicorn and Nginx as a reverse proxy. Pages are served by Gunicorn and Nginx is serving files from my static folder directly.
It's working correctly, except when I get a 404 on a static resource.
I have setup custom error handlers in Flask to show 'pretty' error pages on HTTP error codes. This is also working fine when I request a non-existent page.
However, when a static resource doesn't exist then nginx serves its own default 404 page instead of the Flask app's 404 page (which makes sense since it's bypassing Gunicorn). Is there a way to tell nginx to serve the Flask error handler page via Gunicorn if it encounters an error serving a static resource?
Here is my current nginx conf file for this server:
server {
    listen 80;
    server_name example.com;

    access_log /home/aaron/dev/apwd-flask/logs/access.log;
    error_log /home/aaron/dev/apwd-flask/logs/error.log;

    location / {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }

    location /static {
        alias /home/aaron/dev/apwd-flask/app/static/;
    }
}
I'm thinking (hoping) I can use an error_page directive to give control back to Gunicorn and tell it to serve the appropriate custom error handler, but haven't been able to figure out if that's possible or how to do it from the documentation.
Answering my own question, as I was able to locate an answer after a lot of searching; I'm posting it here for the benefit of anyone else who may have the same issue. I expect this would work for any backend, not just Gunicorn.
https://www.nginx.com/resources/admin-guide/serving-static-content/
In the section entitled 'Trying Several Options', the final example shows the solution to this problem: using the try_files directive in the static location block, I can tell nginx to pass the request to a named location if it fails to find the requested file.
Here is my new nginx conf file which is working as expected now for non-existent static file requests:
server {
    listen 80;
    server_name example.com;

    access_log /home/aaron/dev/apwd-flask/logs/access.log;
    error_log /home/aaron/dev/apwd-flask/logs/error.log;

    location @apwd_flask {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }

    location / {
        try_files $uri @apwd_flask;
    }

    location /static {
        alias /home/aaron/dev/apwd-flask/app/static/;
        try_files $uri @apwd_flask;
    }
}
Now my location @apwd_flask is the Gunicorn backend, and when nginx can't find a static file to serve directly, it sends the request to the backend, which serves its own 404 response.
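On the Flask side, the 'pretty' 404 those proxied requests now hit comes from a custom error handler; a minimal sketch (assuming a templates/404.html in the app, which is my assumption) looks like this:
# Flask custom 404 handler - a sketch; templates/404.html is assumed to exist
from flask import Flask, render_template

app = Flask(__name__)

@app.errorhandler(404)
def page_not_found(error):
    # returning a (body, status) tuple sets the HTTP status code of the response
    return render_template('404.html'), 404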
You need to change the owner of the files in the directory below:
/home/aaron/dev/apwd-flask/app/static/
To give the nginx user read access to the files in the static directory, change the owner to www-data, or change the group to www-data and give read access to all files in this directory.
You can do this by running the command below:
chown -R www-data:www-data /home/aaron/dev/apwd-flask/app/static/
I'm having trouble setting up my site with HTTPS. At the moment, I have my nginx server set to listen for both HTTP and HTTPS requests.
However, now I only want to allow HTTPS and redirect any HTTP requests to HTTPS.
I tried this post without any luck: How to deploy an HTTPS-only site, with Django/nginx?
What is the recommended way of doing this in Django 1.7+?
Below is my nginx.conf file:
# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    server unix:///uwsgi-tutorial/mysite/mysite.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    listen 443 default_server ssl;

    #ssl on;
    ssl_certificate /uwsgi-tutorial/conf/www.example.com.chained.crt;
    ssl_certificate_key /uwsgi-tutorial/conf/www.example.com.key;

    # the domain name it will serve for
    # server_name localhost; # substitute your machine's IP address or FQDN
    server_name example.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /uwsgi-tutorial/mysite/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /uwsgi-tutorial/mysite/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /uwsgi-tutorial/mysite/uwsgi_params; # the uwsgi_params file you installed
    }
}
There is already a library that does this job just fine - sslify:
https://github.com/rdegges/django-sslify
Just proceed with the instructions on the GitHub page.
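For reference, wiring sslify in is essentially a one-line middleware addition; the sketch below shows the relevant settings.py fragment, with the rest of the middleware list elided (the exact placement follows my reading of the project's README, so double-check it there).
# settings.py fragment - a sketch; the remaining middleware entries are elided
MIDDLEWARE_CLASSES = (
    'sslify.middleware.SSLifyMiddleware',  # the README asks for this near the top so the redirect runs early
    # ... your existing middleware entries ...
)
If upgrading past Django 1.7 is an option, django.middleware.security.SecurityMiddleware with SECURE_SSL_REDIRECT = True (added in Django 1.8) does the same job without a third-party package.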