Serving Django with SocketIOServer through uWSGI + Apache

How do I serve my Django application using gevent-socketio's SocketIOServer through Apache via uWSGI?
I have the following uWSGI .ini file:
[uwsgi]
socket = 127.0.0.1:3031
master = true
processes = 2
env = DJANGO_SETTINGS_MODULE=demo.settings
module = app:serve
Then I have the following app.py:
from gevent import monkey
from socketio.server import SocketIOServer
import django.core.handlers.wsgi
import os
import sys

monkey.patch_all()

PORT = 3031
os.environ['DJANGO_SETTINGS_MODULE'] = 'demo.settings'

def serve():
    application = django.core.handlers.wsgi.WSGIHandler()
    SocketIOServer(('', PORT), application, namespace="socket.io").serve_forever()
But it just keeps on loading. Basically, my problem is: how do I tell uWSGI to use SocketIOServer when serving?

It is not clear whether you want uWSGI to serve both, or whether you want an additional process running the socketio server.
Generally you cannot mix blocking apps (like Django) with non-blocking ones (like gevent-based servers) in the same process, and even if you use monkey patching, your database adapter will not be monkey-patched (unless you are using a pure-Python adapter, which is uncommon in Django).
So I suppose you want to spawn the SocketIOServer as a separate process. Just move the last two lines out of serve() so the uWSGI importer will parse/run both; see the sketch below.
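A literal reading of that suggestion, as a minimal sketch keeping the same settings module and port as in the question, would be an app.py like this:
from gevent import monkey
monkey.patch_all()  # patch before the Django/socketio imports

import os
import django.core.handlers.wsgi
from socketio.server import SocketIOServer

os.environ['DJANGO_SETTINGS_MODULE'] = 'demo.settings'
PORT = 3031

application = django.core.handlers.wsgi.WSGIHandler()

# At module level (rather than inside serve()), so that importing this
# module starts the SocketIO server.
SocketIOServer(('', PORT), application, namespace="socket.io").serve_forever()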

Related

How to serve Tornado with Django on waitress?

I have this Tornado application with a Django application wrapped as a WSGI app (running on Windows):
import tornado.web
import tornado.wsgi
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import options
from tornado.web import Application
from django.core.wsgi import get_wsgi_application
from django.conf import settings
from waitress import serve

# PHandler, EHandler, ELASTIC_URL and LISTEN_PORT are defined elsewhere in the project
settings.configure()
wsgi_app = tornado.wsgi.WSGIContainer(get_wsgi_application())

def tornado_app():
    url = [(r"/models//predict", PHandler),
           (r"/models//explain", EHandler),
           ('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app))]
    return Application(url, db=ELASTIC_URL, debug=options.debug, autoreload=options.debug)

if __name__ == "__main__":
    application = tornado_app()
    http_server = HTTPServer(application)
    http_server.listen(LISTEN_PORT)
    IOLoop.current().start()
I'm not sure how to use waitress here.
To serve with waitress I tried http_server = serve(application); the server starts, but I'm not sure that is correct, and I get an error when hitting the endpoint.
waitress is a WSGI server; Tornado is not based on or compatible with WSGI. You cannot use waitress to serve the Tornado portions of your application.
To serve both Tornado and WSGI applications in one thread, you need to use Tornado's HTTPServer as you've done in the original example. For better scalability, I'd recommend splitting the Tornado and Django portions of your application into separate processes and putting a proxy like nginx or haproxy in front of them.
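As a rough illustration of that split (the ports and the URL prefix below are placeholders for the sketch, not taken from the question), an nginx server block could route the Tornado routes to one process and everything else to a WSGI server such as waitress:
# Tornado process assumed on 8001, waitress serving the Django app on 8002
location /models/ {
    include proxy_params;
    proxy_pass http://127.0.0.1:8001;
}

location / {
    include proxy_params;
    proxy_pass http://127.0.0.1:8002;
}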

Loguru stops logging flask exceptions in production

I am using Loguru to handle logging in my Flask REST API. When testing the app locally it logs exactly as expected. When I deploy the app to my Linux server running Apache, the logging stops. I can run the app manually on the server using python app.py and the logging works again, but that just spins up the development server.
from flask import Flask
from loguru import logger
import logging
import os

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Retrieve context where the logging call occurred, this happens to be in the 6th frame upward
        logger_opt = logger.opt(depth=6, exception=record.exc_info)
        logger_opt.log(record.levelno, record.getMessage())

# create the Flask application
app = Flask(__name__)

logger.add(
    'logs/events.log',
    level='DEBUG',
    format='{time} {level} {message}',
    backtrace=True,
    rotation='5 MB',
    retention=9
)
app.logger.addHandler(InterceptHandler())
logging.basicConfig(handlers=[InterceptHandler()], level=20)

if __name__ == '__main__':
    app.run(debug=False)
Figured out the issue. By default, under the Werkzeug dev server, it was writing to the logs/events.log file. When I deployed the application to the Apache server, the logs that would have been placed there were rerouted into the Apache server logs.
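One way to pin the log file to a known place (just a sketch, not part of the original setup; it assumes the relative 'logs/events.log' path was the problem) is to build the path from the module's own directory instead of relying on the process's working directory, which differs under Apache/mod_wsgi:
import os
from loguru import logger

# Resolve the log file relative to this module rather than the current working directory.
LOG_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'logs')
os.makedirs(LOG_DIR, exist_ok=True)

logger.add(
    os.path.join(LOG_DIR, 'events.log'),
    level='DEBUG',
    format='{time} {level} {message}',
    backtrace=True,
    rotation='5 MB',
    retention=9,
)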

Docker and WSGI with a Python Pyramid app?

I am looking at two articles on how to Dockerize a Pyramid app. I am not that familiar with Python, but I am fairly certain that with a Pyramid app you need to use WSGI.
This article uses WSGI:
https://medium.com/#greut/minimal-python-deployment-on-docker-with-uwsgi-bc5aa89b3d35
This one just runs the python executable directly:
https://runnable.com/docker/python/dockerize-your-pyramid-application
It seems unlikely to me that you can run python directly and not incorporate WSGI. Can anyone explain why the runnable.com article's Docker solution would work?
Per the scripts in the second link:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

~snip~

if __name__ == '__main__':
    config = Configurator()
    config.add_route('hello', '/')
    config.add_view(hello_world, route_name='hello')
    app = config.make_wsgi_app()  # The wsgi server is configured here
    server = make_server('0.0.0.0', 6543, app)  # and here
    server.serve_forever()
In other words, the script still goes through WSGI: make_server from the standard library's wsgiref module is itself a (development-grade) WSGI server, so running the python executable directly does not bypass WSGI. The server is built inside the if __name__ == "__main__" block so it only runs when the script is executed directly rather than imported by an external WSGI server such as uWSGI.

Flask with gevent multicore

What is the cleanest way to run a Flask application with a gevent backend server and utilize all processor cores? My idea is to run multiple copies of the Flask application, with a gevent WSGIServer listening on one port in the range 5000..5003 (for 4 processes) and nginx as a load balancer.
But I'm not sure this way is the best, and maybe there are other ways to do it. For example, a master process listens on one port and worker processes handle the incoming connections.
I'll take a shot!
Nginx!
server section:
location / {
    include proxy_params;
    proxy_pass http://127.0.0.1:5000;
}
Flask App
This is a simple Flask app that I will be using for this example.
myapp.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
uWSGI
Okay so I know that you said that you wanted to use gevent, but if you are willing to compromise on that a little bit I think you would be very happy with this setup.
[uwsgi]
master = true
plugin = python
http-socket = 127.0.0.1:5000
workers = 4
wsgi-file = myapp.py
callable = app
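Assuming that file is saved as uwsgi.ini (the filename is just for the example), you would launch it with something like:
uwsgi --ini uwsgi.ini
The uWSGI master process then manages the 4 workers for you.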
Gunicorn
If you must have gevent you might like this little setup
config.py:
import multiprocessing
workers = multiprocessing.cpu_count()
bind = "127.0.0.1:5000"
worker_class = 'gevent'
worker_connections = 30
Then you can run:
gunicorn -c config.py myapp:app
That's right, you have a worker for each CPU and 30 connections per worker.
See if that works for you.
If you are really sold on using nginx as a load balancer, try something like this in your http section:
upstream backend {
    server 127.0.0.1:5000;
    server 127.0.0.1:5002;
    server 127.0.0.1:5003;
    server 127.0.0.1:5004;
}
then one of these in the server section
location / {
    include proxy_params;
    proxy_pass http://backend;
}
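Each backend port is then served by its own copy of the app. Something like this would do for one copy, using gevent's WSGIServer (just a sketch; the filename and the command-line port handling are placeholders, not from the question):
# run as: python serve_gevent.py 5000
import sys

from gevent import monkey
monkey.patch_all()  # patch the standard library before importing the app

from gevent.pywsgi import WSGIServer
from myapp import app

port = int(sys.argv[1])
WSGIServer(('127.0.0.1', port), app).serve_forever()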
Good Luck buddy!

Tornado web server in webfaction

I am starting with web development. I am trying to develop a webapp using the Instagram API and Django. I noticed that a lot of people use the Tornado Web Server for real-time subscriptions. I am using Webfaction as a host and found this code so I can wrap my Django project with the "WSGI Container" that the Tornado Web Server provides:
import os
import tornado.httpserver
import tornado.ioloop
import tornado.wsgi
import tornado.web
import sys
import django.core.handlers.wsgi

sys.path.append('/path/to/project')

class HelloHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello from tornado')

def main():
    os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'  # path to your settings module
    wsgi_app = tornado.wsgi.WSGIContainer(django.core.handlers.wsgi.WSGIHandler())
    tornado_app = tornado.web.Application(
        [
            ('/hello-tornado', HelloHandler),
            ('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),
        ]
    )
    http_server = tornado.httpserver.HTTPServer(tornado_app)
    http_server.listen(8080)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()
So I run this Python script inside my Webfaction server, and every time I try to access "http://mywebsite.com/hello-tornado/" it does not seem to work. I know I am running that Tornado web server on that port, but I do not know how to access it from the browser or something like that. What am I doing wrong here? Thanks for your help and patience. Will cyber high-five for every answer.
EDIT: What I am really trying to do is receive, through Tornado, all the calls from the subscriptions that I make with the Instagram Real-Time Subscription API. For that I have a callback URL "http://mysite.com/sub" and I want to be able to receive it through Tornado.
You are starting the server on port 8080, and web browsers use port 80 by default. Try using: http://mywebsite.com:8080/hello-tornado
If you want to use port 80 and you already have a web server running on the box, you can try following Ali-Akber Saifee's suggestion, or run the WSGI application directly from the server using something like mod_python (http://www.modpython.org). You will lose the ability to run Tornado code, but Django will work.
You have to create a custom app (listening on a port), note the port that is assigned to your app, then configure Tornado to serve on that port: http_server.listen(myport)
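For example (the command-line port handling here is just an illustration, not part of the original setup), the script could read the assigned port instead of hard-coding 8080:
# run as: python run_tornado.py <assigned_port>
import sys
import tornado.httpserver
import tornado.ioloop
import tornado.web

class HelloHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello from tornado')

def main():
    # Webfaction assigns a port when you create the custom app;
    # read it from the command line instead of hard-coding it.
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 8080
    app = tornado.web.Application([('/hello-tornado', HelloHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(port)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()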
You can also avoid tornado and start directly by installing a django app.
