I'm trying to work with pulsar, the concurrent-processing framework for Python, together with Django.
The WSGI server provided by pulsar can be started from the command line with
$ python manage.py pulse
which starts an HTTP WSGI server, similar to Django's development server.
How can it be set up behind the Apache web server with mod_wsgi?
In this case, what you really want is to use pulsar's WSGI server and consume your Django app as a regular WSGI application (if you are on Django 1.5.x or above, a wsgi.py was created in your project home). Here's an example:
from pulsar.apps import wsgi

import yourapp.wsgi

middlewares = [
    wsgi.middleware.wait_for_body_middleware,
    yourapp.wsgi.application,
]

if __name__ == '__main__':
    wsgi.WSGIServer(wsgi.handlers.WsgiHandler(middleware=middlewares)).start()
Pro tip: wait_for_body_middleware makes pulsar wait for the full request body to be available before moving on to another request (which is the normal behaviour of most web apps).
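If you save the script above as, say, server.py (the name is only for illustration), you start the server directly with:
$ python server.py
and it typically also accepts configuration options (bind address, number of workers, and so on) on the command line, much like the pulse command.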
Source: http://pythonhosted.org/pulsar/apps/wsgi/intro.html#wsgi-server
I have seen that Gunicorn's reload option uses inotify (when it is installed, which it is). I have verified that the reloader is working and that some file changes are detected (mainly changes to Gunicorn itself),
but my application code is not included in the list of files supervised by inotify.
What can I do to make Gunicorn supervise my application code?
My application is a Django app with the following wsgi.py:
"""
WSGI config for my project.
This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.
Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
"""
import os
import sys
from django.core.wsgi import get_wsgi_application
# This allows easy placement of apps within the interior
# core directory.
app_path = os.path.abspath(os.path.join(
os.path.dirname(os.path.abspath(__file__)), os.pardir))
sys.path.append(os.path.join(app_path, 'core'))
# We defer to a DJANGO_SETTINGS_MODULE already in the environment. This breaks
# if running multiple sites in the same mod_wsgi process. To fix this, use
# mod_wsgi daemon mode with each site in its own daemon process, or use
# os.environ["DJANGO_SETTINGS_MODULE"] = "config.settings.production"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
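A hedged sketch of one thing worth trying: Gunicorn's documented reload_extra_files setting extends the --reload watch list with additional files, so the application files can be added to it explicitly in a config file (the project path and glob below are assumptions):

# gunicorn.conf.py - illustrative sketch; adjust paths to your project
import pathlib

bind = "127.0.0.1:8000"
workers = 2
reload = True  # equivalent to passing --reload on the command line

# reload_extra_files extends the reloader's watch list with extra files.
# Here we watch every .py file under an assumed project directory:
reload_extra_files = [
    str(path) for path in pathlib.Path("/path/to/project").rglob("*.py")
]

Started with something like gunicorn -c gunicorn.conf.py config.wsgi:application (the module path is guessed from the wsgi.py above).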
I have a service that contains two Flask applications - let's call them app and monitoring_app - running on different ports. monitoring_app is a utility service that exposes metrics collected with prometheus_client. I have run into an issue trying to run them properly under the Gunicorn server.
I have read through the similar topic:
Multiple Flask Application in single uwsgi
but it seems my problem can't be solved with DispatcherMiddleware.
I can start these applications without Gunicorn like this:
from threading import Thread
import os

from my_project import create_app, create_monitoring_app


def start_app(my_app):
    my_app.run(host="localhost",
               debug=True,
               port=int(os.environ.get("API_PORT", "5000")),
               threaded=True,
               use_reloader=False)


if __name__ == "__main__":
    app = create_app()
    app_thread = Thread(target=start_app, daemon=True, args=(app,))
    app_thread.start()

    monitoring_app = create_monitoring_app()
    monitoring_app.run(host="localhost",
                       port=int(os.environ.get("OPS_PORT", "5001")),
                       threaded=True,
                       use_reloader=False,
                       debug=True)
It works OK for development, but it runs under the Flask development server, which is not OK for a production environment. With Gunicorn I can start them separately:
gunicorn "my_project:create_monitoring_app()" -b "[::]:$OPS_PORT" &
gunicorn "my_project:create_app()" -b "[::]:$API_PORT"
but then they run in different interpreters, and I can't use app's prometheus_client from monitoring_app, which is critical for me.
What can I do to achieve the same behaviour as when I run these applications from my development environment? Or maybe I am doing something wrong and should be doing it another way?
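A hedged sketch of one commonly used approach (not taken from the question itself): prometheus_client ships a multiprocess mode, where every process writes its metrics to a directory named by the PROMETHEUS_MULTIPROC_DIR environment variable and the monitoring endpoint aggregates them, so app and monitoring_app no longer need to share an interpreter. Roughly:

# Illustrative only - assumes PROMETHEUS_MULTIPROC_DIR is exported for
# every Gunicorn worker and that this mirrors create_monitoring_app
# from my_project.
from flask import Flask, Response
from prometheus_client import (CONTENT_TYPE_LATEST, CollectorRegistry,
                               generate_latest, multiprocess)


def create_monitoring_app():
    app = Flask("monitoring")

    @app.route("/metrics")
    def metrics():
        # Collect metrics written by all worker processes into one registry.
        registry = CollectorRegistry()
        multiprocess.MultiProcessCollector(registry)
        return Response(generate_latest(registry),
                        mimetype=CONTENT_TYPE_LATEST)

    return app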
I'm trying to deploy a web app built with Django/Redux/React/Webpack on a Digital Ocean droplet. I'm using Phusion Passenger and Nginx on the deployment server.
I used create-react-app for the frontend, which uses React/Redux; the backend API uses django-rest-framework. I built the frontend using npm run build.
The Django app is configured to look in the frontend/build folder for its files and everything works as expected, including authentication. It's based on this tutorial: http://v1k45.com/blog/modern-django-part-1-setting-up-django-and-react/
In settings.py:
ALLOWED_HOSTS = ['*']

TEMPLATES = [
    ...
    'DIRS': [
        os.path.join(BASE_DIR, 'frontend/build'),
    ],
    ...
]

STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'frontend/build/static'),
]
On my development machine, I activate a Python 3.6 virtual environment and run ./manage.py runserver, and the app is displayed at localhost:3000.
On the deployment server, I've cloned the files into a folder in /var/www/ and built the frontend.
I've set up Passenger according to the docs with a file passenger_wsgi.py:
import myapp.wsgi
application = myapp.wsgi.application
And the wsgi.py file, inside the Django app folder, looks like this:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
application = get_wsgi_application()
The Passenger docs only cover a single-part app:
https://www.phusionpassenger.com/library/walkthroughs/start/python.html
https://www.phusionpassenger.com/library/walkthroughs/deploy/python/digital_ocean/nginx/oss/xenial/deploy_app.html
https://www.phusionpassenger.com/library/deploy/wsgi_spec.html
I've tried cloning the tutorial part 1 code directly onto my server and following the instructions to run it. I got this to work on the server by adding "proxy": "http://localhost:8000" to frontend/package.json. If I run the Django server with ./manage.py runserver --settings=ponynote.production_settings xxx.x.x.x:8000
then the app is correctly served up at myserver:8000. However Passenger is still not serving up the right files.
I have changed wsgi.py to say this:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.production_settings")
application = get_wsgi_application()
The page served by Passenger at the URL root now appears to have the right links to the JS files, such as src="/static/bundles/js/main.a416835a.js", but the links don't work: the expected JS is not present. Passenger is failing to serve the JS files from static/bundles/js, even though the Django server can find them.
Very grateful for any help or ideas.
Create-react-app has a fairly opinionated setup for local and production environments.
Locally, running npm start will run a webpack-dev-server, which you would typically access on port 3000. It runs a local nodejs web server to serve the files. You can route requests to your local Django server via the proxy setting.
It's worth noting that at this point there is little or no connection between your React app and Django. If you use the proxy setting, the only thing connecting the two apps is the routing of any requests not handled by your React app to your Django app via the port.
By default in create-react-app (and as noted in the tutorial you mentioned you are following), in production you run npm run build, which processes your create-react-app files into static JS and CSS files; these are then accessed in Django like the static files of any other Django app.
One thing Django still needs in order to access those static files is a way to know which files are generated when running npm run build. Running a build will typically produce output like this:
- css
  |- main.e0c3cfcb.css
  |- main.e0c3cfcb.css.map
- js
  |- 0.eb5a2873.chunk.js
  |- 0.eb5a2873.chunk.js.map
  |- 1.951bae33.chunk.js
  |- 1.951bae33.chunk.js.map
A random hash is added to the filenames to ensure cache busting. This is where webpack-bundle-tracker and django-webpack-loader come in. When the build files are generated, an accompanying stats file (manifest.json here) is also created, listing the files produced by the build. It is generated by Webpack and picked up by django-webpack-loader so that Django knows which files to import.
It is possible to run a nodejs server in production, or to use server-side rendering, but if you're following the tutorial you mentioned and using create-react-app default settings, then running npm run build and deploying the static files is the simplest, safest option.
None of the Passenger deployment links you mention covers anything beyond deploying a Python/Django app - you would need to manage two apps and two deployments to have both Django and React running as servers in production.
Note that the tutorial you mention covers how to get your build files into Django in production, but you will need to ensure that you have webpack-bundle-tracker, django-webpack-loader and your Django staticfiles configuration all configured to work together.
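As a hedged illustration of how those pieces usually fit together (the stats file name and bundle directory are assumptions that must match your webpack-bundle-tracker configuration), the Django side looks roughly like:

# settings.py - illustrative sketch only
WEBPACK_LOADER = {
    'DEFAULT': {
        # Directory (inside your staticfiles dirs) where Webpack writes bundles
        'BUNDLE_DIR_NAME': 'bundles/',
        # Stats file written by webpack-bundle-tracker during the build
        'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats.json'),
    }
}

Templates then pull in the hashed files with django-webpack-loader's render_bundle tag instead of hard-coding filenames.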
The key missing setting was the 'location' setting in the Passenger config file.
Although the Django server serves up the static files, including the build files for your React app, Nginx doesn't see any static files except those in a 'public' directory.
So to deploy a Django app built with Webpack to production, you need to tell Nginx about those files. If you're using Passenger, these settings probably live in a separate Passenger config file. 'alias' is the directive to use in this case, where the folder has a different name from 'static' (which is where the web page links point).
If you use a virtual environment for your app, you need to specify where Passenger can find the right Python executable.
/etc/nginx/sites-enabled/myapp.conf
server {
    listen 80;
    server_name xx.xx.xx.xx;

    # Tell Passenger where the Python executable is
    passenger_python /var/www/myapp/venv36/bin/python3.6;

    # Tell Nginx and Passenger where your app's 'public' directory is
    # and where to find the wsgi.py file
    root /var/www/myapp/myapp/myapp;

    # Tell Nginx where Webpack puts the bundle folder
    location /static/ {
        autoindex on;
        alias /var/www/myapp/myapp/assets/;
    }

    # Turn on Passenger
    passenger_enabled on;
}
Passenger uses the wsgi.py file as an entry point to your app. You need a passenger_wsgi.py file one level above the wsgi.py file. This tells Passenger where to find the wsgi.py file.
/var/www/myapp/myapp/passenger_wsgi.py
import myapp.wsgi
application = myapp.wsgi.application
/var/www/myapp/myapp/myapp/wsgi.py
If you are using a separate production_settings.py file, make sure this is specified here.
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.production_settings")
application = get_wsgi_application()
I've built a Django project that works, even after I freeze it using cx_Freeze and py2exe.
Now I'm trying to set up the project for distribution, which requires a real web server. I'm going for Gunicorn (and will add Nginx once it works). I managed to run the Gunicorn server properly through the command line using:
gunicorn wsgi:application
However, I need to be able to run the server from my Python script, as the server is meant to run on localhost. Gunicorn used to ship with a run_gunicorn management command designed for Django, but this command is now deprecated.
I tried to understand the following method:
How to use Flask-Script and Gunicorn
But I can't figure out how to make it work with Django.
The following doesn't work:
import os

from django.core.wsgi import get_wsgi_application
from gunicorn.app.base import Application

os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
application = get_wsgi_application()
Application().run(application)
Could someone tell me how to start the Gunicorn server from my Python script?
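A hedged sketch of one way to do this, based on the custom-application pattern described in Gunicorn's documentation: subclass gunicorn.app.base.BaseApplication and hand it the Django WSGI application (the settings module and bind address below are assumptions):

import os

import gunicorn.app.base
from django.core.wsgi import get_wsgi_application


class StandaloneApplication(gunicorn.app.base.BaseApplication):
    """Run a WSGI callable under Gunicorn without using the command line."""

    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super().__init__()

    def load_config(self):
        # Copy recognised options into Gunicorn's configuration object.
        for key, value in self.options.items():
            if key in self.cfg.settings and value is not None:
                self.cfg.set(key.lower(), value)

    def load(self):
        return self.application


if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')  # assumption
    StandaloneApplication(get_wsgi_application(),
                          {'bind': '127.0.0.1:8000', 'workers': 2}).run()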
My question is basically what's in the title: how can I set up Gunicorn to run a web.py app? (Also, if there are any differences, how would I do it on Heroku?)
I already have my app running on Heroku using the built-in CherryPy server, but I have not been able to get Gunicorn to work with web.py (I just have no idea where to start - I couldn't find any tutorials).
I'm afraid I'm not familiar with Heroku, but I can answer your basic question.
gunicorn is an HTTP server for running Python web apps via WSGI. web.py is a framework for creating Python web apps using WSGI.
So you don't really need a tutorial for using the two together; all you need to do is figure out how to pass the WSGI entry point of your web.py application to gunicorn. A WSGI application is just a Python callable with the right interface, i.e. it takes certain parameters and returns a certain response. See this WSGI tutorial for more.
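For reference, the interface is small enough to show in full; a minimal WSGI application (not tied to web.py or gunicorn) is just:

def application(environ, start_response):
    # environ is a dict describing the request; start_response is called
    # with the status line and response headers before the body is returned.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, WSGI!']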
The "hello world" application from the web.py tutorial looks like this test.py:
import web

urls = (
    '/', 'index'
)


class index:
    def GET(self):
        return "Hello, world!"


if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()
But that does not expose the WSGI application which gunicorn needs.
web.py provides a WSGI application via the wsgifunc method of web.application. We can add this to test.py by adding the following after the index class:
# For serving using any wsgi server
wsgi_app = web.application(urls, globals()).wsgifunc()
This is basically what the web.py documentation tells you to do in the deployment section when using Apache + mod_wsgi. The fact that the Python code is the same for us with gunicorn is not a coincidence: this is exactly what WSGI gives you - a standard way to write the Python side so that it can be deployed with any WSGI-capable server.
As explained in the gunicorn docs, we can then point gunicorn at the wsgi_app member of the test module as follows:
(tmp)day#office:~/tmp$ gunicorn test:wsgi_app
2012-12-03 23:31:11 [19265] [INFO] Starting gunicorn 0.16.1
2012-12-03 23:31:11 [19265] [INFO] Listening at: http://127.0.0.1:8000 (19265)
2012-12-03 23:31:11 [19265] [INFO] Using worker: sync
2012-12-03 23:31:11 [19268] [INFO] Booting worker with pid: 19268
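As for the Heroku part of the question (not covered above, so treat this as a hedged note): Heroku starts whatever command the Procfile declares, so the same entry point would typically be wired up as something like
web: gunicorn test:wsgi_app -b 0.0.0.0:$PORT
with gunicorn added to requirements.txt; Heroku supplies the port through the PORT environment variable.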