It is now time to move my Python Eve API into a production environment. There are several ways to do this, and the most common requirements are:
Error Logging
Automatic Respawn
Multiple Processes (if possible)
The best solution I have found is to use an nginx server as the frontend, with Python Eve running on uWSGI behind it.
The problem: I have a custom __main__ block, which is not called by uWSGI.
Does anyone have this configuration running, or another proposal? As soon as it works, I will share a working configuration.
Thank you.
Solution (Update):
Based on the proposal below, I moved the Eve() call into __init__.py and run the app with a separate wsgi.py.
Folder structure:
webservice/__init__.py
webservice/modules/...
settings.py
wsgi.py
Where __init__.py contains
app = Eve(auth=globalauth.TokenAuth)
Bootstrap(app)
app.config['X_DOMAINS'] = '*'
...
and wsgi.py contains
from webservice import app
if __name__ == "__main__":
    app.run()
uwsgi.ini
[uwsgi]
chdir=/var/www/api/prod
module=wsgi:app
socket=/tmp/api.sock
processes=1
master=True
pidfile=/tmp/api.v1.pid
max-requests=5000
daemonize=/var/www/api/logs/prod.api.log
logto=/var/www/api/logs/uwsgi.log
nginx.conf
location = /v1 { rewrite ^ /v1/; }
location /v1 { try_files $uri @apiWSGIv1; }
location @apiWSGIv1 {
    include uwsgi_params;
    uwsgi_modifier1 30;
    uwsgi_pass unix:/tmp/digdisapi.sock;
}
start command:
uwsgi --ini uwsgi.ini
WSGI containers expect a callable to serve; they do not execute your __main__ entry point. With run:Eve you are asking uWSGI to call (at every request) the "Eve" function in the "run" module, which is obviously wrong.
Move
app = Eve(auth=globalauth.TokenAuth)
out of the __main__ check and tell uWSGI to use the 'app' callable in the "run" module with
module = run:app
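For illustration, a minimal sketch of what the corrected run module could look like; the module name run and the globalauth.TokenAuth class come from the question and answer above, the rest is assumed:
# run.py -- minimal sketch; names follow the question/answer above, the rest is illustrative.
from eve import Eve
import globalauth

# Module-level callable: this is what uWSGI imports via "module = run:app".
app = Eve(auth=globalauth.TokenAuth)

if __name__ == "__main__":
    # Development convenience only; uWSGI never executes this block.
    app.run()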
Related
I have a React frontend served from inside a Django backend. Communication between the two is done through django-rest-framework. On the React side, fetching is done through relative paths, so in my package.json I have added the line:
"proxy": "http://127.0.0.1:8000",
Django hosts the React app locally without problems when I run python3 manage.py runserver.
On the remote I am trying to use nginx with gunicorn to deploy this app on an AWS Ubuntu instance, and I run into a problem:
First, I run python3 manage.py collectstatic.
Then I point nginx at those static files for the index.html.
Success! nginx serves the React static files.
I use gunicorn myapp.wsgi -b 127.0.0.1:8000 to run Django.
Problem! The nginx-served React files do not fetch anything. fetch does not call this local path, but instead calls the public IP of the AWS instance. Also, I cannot simulate GET/POST requests to the Django backend, because I think nginx "covers" the paths generated by Django's gunicorn process.
Please tell me how I can connect the nginx-served React frontend to the gunicorn-run Django backend.
My nginx sites-enabled/example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /home/ubuntu/fandigger/frontend/build/;
    server_name public_ip_without_port;

    location / {
        try_files $uri $uri/ =404;
    }
}
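For illustration, the kind of location block that would forward API requests to gunicorn might look like the sketch below; the /api/ prefix is an assumption and must match whatever prefix the Django URLs actually use:
# Hypothetical addition inside the server block above; prefix and port are assumptions.
location /api/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}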
I'm trying to deploy a web app built with Django/Redux/React/Webpack on a Digital Ocean droplet. I'm using Phusion Passenger and Nginx on the deployment server.
I used create-react-app to build a Django app which has a frontend that uses React/Redux, and a backend api that uses django-rest-framework. I built the frontend using npm run build.
The Django app is configured to look in the frontend/build folder for its files and everything works as expected, including authentication. It's based on this tutorial: http://v1k45.com/blog/modern-django-part-1-setting-up-django-and-react/
In settings.py:
ALLOWED_HOSTS = ['*']
TEMPLATES = [
    ...
    'DIRS': [
        os.path.join(BASE_DIR, 'frontend/build'),
    ],
    ...
]

STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'frontend/build/static'),
]
On my development machine, I activate a Python 3.6 virtual environment and run ./manage.py runserver, and the app is displayed at localhost:3000.
On the deployment server, I've cloned the files into a folder in /var/www/ and built the frontend.
I've set up Passenger according to the docs with a file passenger_wsgi.py:
import myapp.wsgi
application = myapp.wsgi.application
And the wsgi.py file is in the djangoapp folder below:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
application = get_wsgi_application()
The Passenger docs only cover a single-part app:
https://www.phusionpassenger.com/library/walkthroughs/start/python.html
https://www.phusionpassenger.com/library/walkthroughs/deploy/python/digital_ocean/nginx/oss/xenial/deploy_app.html
https://www.phusionpassenger.com/library/deploy/wsgi_spec.html
I've tried cloning the tutorial part 1 code directly onto my server and following the instructions to run it. I got this to work on the server by adding "proxy": "http://localhost:8000" to frontend/package.json. If I run the Django server with ./manage.py runserver --settings=ponynote.production_settings xxx.x.x.x:8000
then the app is correctly served up at myserver:8000. However Passenger is still not serving up the right files.
I have changed wsgi.py to say this:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.production_settings")
application = get_wsgi_application()
The page served by Passenger at the URL root now appears to have the right links to the JS files, such as src="/static/bundles/js/main.a416835a.js", but the links don't work: the expected JS is not served. Passenger is failing to serve the JS files from static/bundles/js, even though the Django server can find them.
Very grateful for any help or ideas.
Create-react-app has a fairly opinionated setup for local and production environments.
Locally, running npm start will run a webpack-dev-server, which you would typically access on port 3000. It runs a local nodejs web server to serve the files. You can route requests to your local Django server via the proxy setting.
It's worth noting that at this point there is little or no connection between your React app and Django. If you use the proxy setting, the only thing connecting the two apps is the routing of any requests not handled by your React app to your Django app via the port.
By default in create-react-app (and as noted in the tutorial you mentioned you are following), in production you would run npm run build, which will process your create-react-app files into static JS and CSS files, which are then accessed in Django like the static files of any other Django app.
One thing Django is missing in order to access the static files is a way to know what files are generated when running npm run build. Running a build will typically result in files output like this:
- css
|- main.e0c3cfcb.css
|- main.e0c3cfcb.css.map
- js
|- 0.eb5a2873.chunk.js
|- 0.eb5a2873.chunk.js.map
|- 1.951bae33.chunk.js
|- 1.951bae33.chunk.js.map
A random hash is added to filenames to ensure cache busting. This is where webpack-bundle-tracker and django-webpack-loader come in. When build files are generated, an accompanying file is also created called manifest.json, listing the files created for the build. This is generated in Webpack and picked up by django-webpack-loader so that Django can know which files to import.
It is possible to run a nodejs server in production, or to use server-side rendering, but if you're following the tutorial you mentioned and using create-react-app default settings, then running npm run build and deploying the static files is the simplest, safest option.
Nothing in any of the Passenger deployment links you mention covers anything beyond deploying a Python/Django app; you would need to manage two apps and two deployments to have both Django and React running as servers in production.
Note that the tutorial you mention covers how to get your build files into Django in production, but you will need to ensure that you have webpack-bundle-tracker, django-webpack-loader and your Django staticfiles configuration all configured to work together.
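To make that concrete, here is a sketch of the Django side of that wiring; the bundle directory and stats file name are assumptions (the django-webpack-loader defaults), not values taken from your project:
# settings.py (sketch) -- Django side of the webpack-bundle-tracker / django-webpack-loader pairing.
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

INSTALLED_APPS = [
    # ...
    'webpack_loader',
]

WEBPACK_LOADER = {
    'DEFAULT': {
        # Folder (inside your static files) where the compiled bundles live.
        'BUNDLE_DIR_NAME': 'bundles/',
        # Stats file written by webpack-bundle-tracker during `npm run build`.
        'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats.json'),
    }
}
Templates then load the hashed files with {% load render_bundle from webpack_loader %} and {% render_bundle 'main' %}, so Django never needs to know the hashed filenames.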
The key missing setting was the 'location' setting in the Passenger config file.
Although the Django server serves up the static files, including the build files for your React app, Nginx doesn't see any static files except those in a 'public' directory.
So to deploy a Django app built with Webpack to production, you need to tell Nginx about those files. If you're using Passenger, these settings are probably in a separate Passenger config file. 'alias' is the directive to use in this case, where the folder has a different name from 'static' (which is where the web page links point).
If you use a virtual environment for your app, you need to specify where Passenger can find the right Python executable.
/etc/nginx/sites-enabled/myapp.conf
server {
    listen 80;
    server_name xx.xx.xx.xx;

    # Tell Passenger where the Python executable is
    passenger_python /var/www/myapp/venv36/bin/python3.6;

    # Tell Nginx and Passenger where your app's 'public' directory is
    # and where to find the wsgi.py file
    root /var/www/myapp/myapp/myapp;

    # Tell Nginx where Webpack puts the bundle folder
    location /static/ {
        autoindex on;
        alias /var/www/myapp/myapp/assets/;
    }

    # Turn on Passenger
    passenger_enabled on;
}
Passenger uses the wsgi.py file as an entry point to your app. You need a passenger_wsgi.py file one level above the wsgi.py file. This tells Passenger where to find the wsgi.py file.
/var/www/myapp/myapp/passenger_wsgi.py
import myapp.wsgi
application = myapp.wsgi.application
/var/www/myapp/myapp/myapp/wsgi.py
If you are using a separate production_settings.py file, make sure this is specified here.
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.production_settings")
application = get_wsgi_application()
I currently have a REST API written in Python with Flask and the Flask-RESTful extension.
I use gevent's WSGIServer:
from gevent.pywsgi import WSGIServer

def runserver():
    api.debug = True
    http_server = WSGIServer(('', 5000), api)
    http_server.start()
All works like a charm on my machine.
I want to go into production on a Linux VM. I searched the internet for hours; I didn't choose mod_wsgi because gevent doesn't work properly with it, so I prefer to use nginx.
On the internet I saw Flask apps hosted with uWSGI; my question is: do I need to use uWSGI?
Even if I use gevent's WSGIServer in my Flask application?
How do I work with this?
If I don't need uWSGI, do I only need to configure the nginx sites to pass requests properly to my Flask app?
I'm a newbie to all this, so I'm a little confused.
Thanks in advance.
You can run uWSGI in gevent mode (http://uwsgi-docs.readthedocs.org/en/latest/Gevent.html) and then route all Flask requests to it via nginx.
server {
    listen 80;
    server_name customersite1.com;
    access_log /var/log/customersite1/access_log;

    location / {
        root /var/www/customersite1;
        uwsgi_pass 127.0.0.1:3031;
        include uwsgi_params;
    }
}
See http://uwsgi-docs.readthedocs.org/en/latest/Nginx.html for more details.
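As a rough sketch, starting uWSGI in gevent mode could look something like the command below; the module and callable names and the worker/async counts are placeholders, and uWSGI must be built with the gevent plugin:
# Placeholder names; adjust the module, callable and counts to your project.
uwsgi --socket 127.0.0.1:3031 \
      --module myflaskapp:api \
      --gevent 100 \
      --master --processes 2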
I am running a Flask web app using gunicorn+Nginx. I run gunicorn in daemon mode. I configured gunicorn and nginx to log their access and error to files. But I just cannot get Flask logs to a file.
I use a shell file to start my app with gunicorn:
#!/bin/bash
export VIRTUAL_ENV="/to/virtual/path"
export PATH="$VIRTUAL_ENV/bin:$PATH"
source "$VIRTUAL_ENV/bin/activate"
NAME="hello"
NUM_WORKERS=1
exec gunicorn hello:app \
--name $NAME \
--workers $NUM_WORKERS \
--log-level=debug \
--daemon \
--pid $VIRTUAL_ENV/logs/pid_gunicorn \
--access-logfile $VIRTUAL_ENV/logs/access_gunicorn.log \
--error-logfile $VIRTUAL_ENV/logs/error_gunicorn.log
And in my Flask app I add logging as the docs require:
app.debug = False
...
if __name__ == '__main__':
    if app.debug != True:
        import logging
        from logging.handlers import RotatingFileHandler
        handler = RotatingFileHandler("flask.log", maxBytes=10000, backupCount=1)
        handler.setLevel(logging.DEBUG)
        app.logger.addHandler(handler)
        app.logger.debug("test!!")
    app.run()
I also added app.logger.debug at other places.
When I start gunicorn without --daemon, the logging file works fine. But once I add --daemon then no logs are generated.
I tried to use print but it only works without --daemon too.
I have searched a while and it seems gunicorn does not support application logging. But I thought logging to a file would be fine?
Does anybody know how I could log out to a file under my settings?
So, you're not actually setting up any logging. Let me explain.
The first argument to gunicorn is your app: gunicorn hello:app. When you launch gunicorn, it uses regular Python imports, basically from hello import app.
In your file hello.py, you are setting up your logging. But you have that chunk of code wrapped in an if __name__ == "__main__": block. If you do python hello.py, that will work. But that is NOT what gunicorn is doing. None of that code is being executed by your service (and you should notice that; after all, your development server is not running either...).
Set up logging at the top of the file, outside of the if block. You also have the option of telling gunicorn to capture output, in which case it handles getting your app's output into log files. This is PROBABLY closer to what you want. If you DID use the logging config you have, gunicorn is going to run more than one separate process, and they would all be trying to log to the same file on disk.
I am not sure what you mean by "the logging works fine": is it going to a file? My expectation would be that without --daemon, gunicorn runs in the foreground and its output shows up in your terminal (but doesn't go to disk unless you have redirected the output of your script to disk, or are starting it with systemd maybe?).
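To make that concrete, here is a minimal sketch of hello.py with the handler configured at module level; the file and log names follow the question, while the route and messages are purely illustrative:
# hello.py -- logging configured at import time, so it also runs under gunicorn.
import logging
from logging.handlers import RotatingFileHandler

from flask import Flask

app = Flask(__name__)

handler = RotatingFileHandler("flask.log", maxBytes=10000, backupCount=1)
handler.setLevel(logging.DEBUG)
app.logger.setLevel(logging.DEBUG)  # the logger itself must also allow DEBUG records
app.logger.addHandler(handler)

@app.route("/")
def index():
    app.logger.debug("handling a request")  # illustrative route and message
    return "hello"

if __name__ == "__main__":
    # Development server only; gunicorn imports `app` and never runs this block.
    app.run()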
The Pythonic way to run a daemon process is to use something like Supervisord; forget bash, it's Python. Have you considered using nginx as a proxy pass? Gunicorn can handle the WSGI side. I think this has been available since nginx 1.3.13; it is meant for WebSockets, but it will work even if you are running plain HTTP.
Something like:
server {
    listen 80;
    server_name localhost;
    access_log /var/log/nginx/example.log;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
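For the Supervisord side, a program section along these lines would keep gunicorn in the foreground and respawn it on failure; the virtualenv path echoes the question's script, and the remaining paths and names are illustrative:
; /etc/supervisor/conf.d/hello.conf -- illustrative sketch; adjust paths and names.
[program:hello]
command=/to/virtual/path/bin/gunicorn hello:app -b 127.0.0.1:5000 --workers 1
directory=/path/to/app
autostart=true
autorestart=true
stdout_logfile=/var/log/hello/gunicorn.out.log
stderr_logfile=/var/log/hello/gunicorn.err.log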
For your purpose, pass the --capture-output flag. It should forward your application's output to the error log file.
Another issue I ran into is that in daemon mode, output to stdout and stderr goes nowhere. That is annoying when gunicorn runs inside a Docker container.
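For example, the launch script from the question could add the flag like this; only --capture-output is new, the other options are unchanged from the question:
exec gunicorn hello:app \
    --name $NAME \
    --workers $NUM_WORKERS \
    --log-level=debug \
    --daemon \
    --capture-output \
    --pid $VIRTUAL_ENV/logs/pid_gunicorn \
    --access-logfile $VIRTUAL_ENV/logs/access_gunicorn.log \
    --error-logfile $VIRTUAL_ENV/logs/error_gunicorn.log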
Given the configuration below for nginx, uWSGI and Flask.
If I move the Flask application from /test/ to production, I must update the nginx configuration, and preferably only that configuration. So a solution would be if the Flask @app.route('/test/') could be made relative, i.e. in a non-existing syntax: @app.route('[root]'). I can't find a way to accomplish this. That said, I presume there is a way, because having to alter all the paths in Flask seems so impracticable.
Nginx:
location /test/ {
    uwsgi_pass 127.0.0.1:3031;
    include uwsgi_params;
}
Uwsgi:
uwsgi --socket 127.0.0.1:3031 --wsgi-file myflaskapp.py --callable app --proces$
Flask:
from flask import Flask
app = Flask(__name__)
@app.route('/test/')
def index():
    return "<span style='color:red'>I am app 1</span>"
What I'm trying to accomplish is to move my application to any sub-path of the domain (site.com/apps, site.com/congres/, and so forth) while only updating the nginx configuration.
You're probably thinking of @app.route('/'). The route URL appears to be absolute, but it is actually relative to the root URL of your application.
This is actually covered in Flask's documentation. You only specify the URL to bind your application to in the nginx configuration; Flask should be able to detect this location from the WSGI environment and build its routes accordingly.
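One common way to do this with uWSGI is to mount the app under the prefix and let uWSGI manage SCRIPT_NAME, so the routes themselves stay relative to '/'. A sketch, reusing the file and callable names from the question:
# Sketch: mount the Flask app under /test and let uWSGI split SCRIPT_NAME from PATH_INFO,
# so @app.route('/') serves site.com/test/ and only nginx/uWSGI know the prefix.
uwsgi --socket 127.0.0.1:3031 \
      --mount /test=myflaskapp.py \
      --callable app \
      --manage-script-name
With this setup, moving the app to another sub-path only means changing the nginx location and the mount point; url_for() keeps building correct URLs because Flask reads the prefix from SCRIPT_NAME in the WSGI environment.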