Easy to set up, multi-threaded server for my Django Piston API - python

I am writing an API that reads from MySQL and Solr (which can give latencies of 150ms) to provide formatted output. I will be hosting this on a VPS, and I need to choose a web server for this application. It will be used only within localhost (and local LAN in future).
I have these concerns:
1) It should launch multiple worker threads to minimize bottlenecks with concurrent requests (Solr can take 150 ms to return a response).
2) It should respawn easily when a component crashes, so that restarting is just a matter of servd -restart.
3) Deploying a new application should be as simple as copying a folder to the www directory (or equivalent), so that new requests to this app are served from then on.
I am not optimizing for performance for now, so I need something easy to set up. Also, is #3 even possible for a non-load-balanced Django app?

Gunicorn is very simple to deploy and manage. It has no built-in reloading capability, but you can easily use an external utility such as watchdog to monitor a directory and reload Gunicorn with kill -HUP <pid>.
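A minimal sketch of that reload loop, using a plain-stdlib polling watcher in place of watchdog (the pidfile path and poll interval are assumptions; Gunicorn writes a pidfile when started with --pid):

```python
import os
import signal
import time


def snapshot(root):
    """Map each .py file under root to its modification time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                mtimes[path] = os.stat(path).st_mtime
    return mtimes


def reload_gunicorn(pidfile):
    """HUP the Gunicorn master so it gracefully reloads its workers."""
    with open(pidfile) as f:
        os.kill(int(f.read().strip()), signal.SIGHUP)


def watch(root, pidfile, interval=1.0):
    """Poll the source tree; on any change, reload Gunicorn."""
    seen = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        if current != seen:
            seen = current
            reload_gunicorn(pidfile)
```

watchdog does the same job with inotify events instead of polling, which scales better for large trees.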

Related

What's the advantage of putting nginx in front of uWSGI?

I see a lot of people running their Python app with uWSGI behind nginx, with nginx forwarding requests to uWSGI. uWSGI can run directly as a web server, and it looks quite fast and scalable, so what's the purpose of putting nginx in front of it?
uWSGI documentation answers this question:
Generally your webserver of choice (Nginx, Mongrel2, etc.) will serve static files efficiently and quickly and will simply forward dynamic requests to uWSGI backend nodes.
The uWSGI project has ISPs and PaaS (that is, the hosting market) as
the main target, where generally you would want to avoid generating
disk I/O on a central server and have each user-dedicated area handle
(and account for) that itself. More importantly still, you want to
allow customers to customize the way they serve static assets without
bothering your system administrator(s).
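In practice that split looks roughly like the following nginx sketch (the paths and backend address are assumptions for illustration): static assets are served from disk, everything else is forwarded to uWSGI.

```nginx
server {
    listen 80;

    # nginx serves static assets straight from disk...
    location /static/ {
        alias /srv/myapp/static/;   # hypothetical path
    }

    # ...and forwards only dynamic requests to the uWSGI backend
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;  # hypothetical backend address
    }
}
```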

Python Flask Application Manager

I am coming from a Java/Tomcat background and was wondering if there is anything out there which could be similar to the Tomcat manager application?
I'm imagining a webapp that I can use to easily deploy and un-deploy Flask based webapps. I guess an analogy to Tomcat would be a WSGI server with a web based manager.
Unfortunately, the deployment story for Python / WSGI is not quite as neat as Java's WAR-file-based deployment. (And while Python is not Java, that doesn't mean WAR-style deployment wouldn't be nice to have.) So nothing will quite match your expectations out of the box - but you may be able to cobble together something similar.
First, you'll want a web server that can easily load and unload WSGI applications without requiring a server restart - the one that immediately jumps to mind is uwsgi in emperor mode (and here's an example setup).
Second, you need a consistent way to lay out your applications so the WSGI file can be picked up / generated. Something as simple as always having a root-level app.wsgi file that can be copied to the directory being watched by uwsgi will do.
Third, you'll need a script that can take a web application folder / virtualenv and move / symlink it to the "available applications" folder. You'll need another one that can add / symlink, touch (to restart) and remove (to shut down) the app.wsgi files from the directory(ies) that uwsgi is watching for new vassal applications. If you need to run it across multiple machines (or even just one remote machine) you could use Fabric.
Fourth and finally, you'll need a little web application to enable you to manage the WSGI files for these available applications without using the command line. Since you just spent all this time building some infrastructure for it, why not use Flask and deploy it on itself to make sure everything works?
It's not a pre-built solution, but hopefully this at least points you in the right direction.
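The third step can be sketched in a few lines of stdlib Python (the per-app uwsgi.ini layout and directory names are assumptions; the emperor's behavior - spawn on appearance, restart on touch, shut down on removal - is how emperor mode works):

```python
import os


def deploy(app_dir, vassals_dir):
    """Symlink an app's uwsgi ini into the emperor's watched
    directory; the emperor spawns a vassal when it appears."""
    name = os.path.basename(app_dir.rstrip("/"))
    ini = os.path.join(app_dir, "uwsgi.ini")   # hypothetical per-app layout
    link = os.path.join(vassals_dir, name + ".ini")
    if not os.path.exists(link):
        os.symlink(ini, link)
    return link


def restart(vassals_dir, name):
    """Touching a vassal's ini makes the emperor restart it."""
    os.utime(os.path.join(vassals_dir, name + ".ini"), None)


def undeploy(vassals_dir, name):
    """Removing the ini makes the emperor shut the vassal down."""
    os.remove(os.path.join(vassals_dir, name + ".ini"))
```

Wrapping these three functions in Fabric tasks gets you remote deploys; wrapping them in Flask views gets you the web manager from step four.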

Heroku Node.js + Python

I am trying to build a web-app that has both a Python part and a Node.js part. The Python part is a RESTful API server, and the Node.js will use sockets.io and act as a push server. Both will need to access the same DB instance (Heroku Postgres in my case). The Python part will need to talk to the Node.js part in order to send push messages to be delivered to clients.
I have the Python and DB parts built and deployed, running under a "web" dyno. I am not sure how to build the Node part -- and especially how the Python part can talk to the Node.js part.
I am assuming that the Node.js part will need to be a new Heroku app, so that it too can run on a 'web' dyno, benefit from the HTTP routing stack, and let clients connect to it. In that case, will my Python dynos access it just like regular clients do?
What are the alternatives? How is this usually done?
After having played around a little, and also doing some reading, it seems like Heroku apps that need this have 2 main options:
1) Use some kind of back-end, that both apps can talk to. Examples would be a DB, Redis, 0mq, etc.
2) Use what I suggested above. I actually went ahead and implemented it, and it works.
Just thought I'd share what I've found.
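Option 2 can be sketched with nothing but the stdlib: the Python dyno POSTs to the Node.js app exactly as an external client would (the /push endpoint and payload shape here are hypothetical, not part of either app):

```python
import json
from urllib.request import Request, urlopen


def send_push(base_url, channel, message):
    """POST a push message to the Node.js app's (hypothetical)
    /push endpoint, exactly as a regular HTTP client would."""
    body = json.dumps({"channel": channel, "message": message}).encode()
    req = Request(
        base_url + "/push",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return resp.status
```

The trade-off versus option 1 is coupling: with a shared backend like Redis, the Python app doesn't need to know the Node.js app's URL, and messages survive if the push server is briefly down.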

Flask unresponsive while uploading file or waiting for other server

My Flask app stops responding while uploading files or while collecting data from another server via GET. I assume the problem is that Flask is running on only one thread.
How can I change this, so multiple users can use the site?
Flask's development webserver (invoked when you use app.run) is not a production web server.
Quoting the docs:
You can use the builtin server during development, but you should use a full deployment option for production applications. (Do not use the builtin development server in production.)
If you want to use Flask in a production environment take a look at the deployment options suggested by the documentation.
For testing purposes with small applications that are doing slightly complicated things, I deploy the code I'm developing behind CherryPy using this snippet. (The only disadvantage of this pattern is that you lose access to Werkzeug's debugger.)
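For a quick stdlib-only alternative to the CherryPy pattern, you can bolt a thread-per-request mixin onto wsgiref's server - the trivial app below is a stand-in for a Flask app object, which is itself a WSGI callable:

```python
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server


class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """Handles each request in its own thread, so one slow upload
    or upstream GET no longer blocks every other user."""
    daemon_threads = True


def app(environ, start_response):
    # stand-in for a Flask app object (itself a WSGI callable)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]


# port 0 asks the OS for a free port; use a fixed port in practice
server = make_server("", 0, app, server_class=ThreadingWSGIServer)
# server.serve_forever()  # uncomment to actually serve requests
```

Like the built-in development server, this isn't a production deployment, but it does remove the one-request-at-a-time behavior the question describes.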

most easy, reliable, cheap way to deploy this python (workhorse) app with a PHP frontend?

I am developing a small part of a PHP application with some python code. The python code runs like an equivalent of a servlet (listens and responds to HTTP on port 8765) on localhost. The PHP app calls it like:
PHP'S_CURL("http://localhost:8765/search?term=electrical+design")
The pyth-let is written with the BaseHTTPServer module like:
from BaseHTTPServer import BaseHTTPRequestHandler

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/search"):  # == "/search" would miss the ?term=... query string
            self.send_response(200)
            self.end_headers()
            # ....
            self.wfile.write(st)
It works on my workstation and my colleague's. I now want to deploy it in a production environment, with modifications. The idea in mind is that I should:
modify my app to FCGI
get an inexpensive VPS account
set Apache to use FCGI to spawn and keep alive both the PHP app and the pyth-let.
So it's a localhost app, which shouldn't be exposed publicly. There should be a reliable way to keep it alive. We expect ~800 hits a day before needing an upgrade, so only a single instance need be kept alive.
Is there a feasible way to do this on a popular shared host, rather than a VPS? Am I on the right track with my above-mentioned plan?
Postscript
I mentioned an "easy, reliable, cheap way"; by "way" I meant both the development direction and a good (cheap) hosting plan that can support it.
You can use flup to serve your Python app over FastCGI. I've also used gunicorn to deploy Python webapps, along with supervisor to keep them alive, and found that to be a good approach, and even easier to set up.
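If you go the gunicorn + supervisor route, a minimal supervisord program section might look like this (the paths, port, and WSGI module name are all assumptions for illustration; a single worker matches the single-instance requirement above):

```ini
; supervisord keeps the process alive and restarts it if it crashes
[program:pythlet]
command=/srv/app/venv/bin/gunicorn -w 1 -b 127.0.0.1:8765 app:application
directory=/srv/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/pythlet.log
```

supervisorctl restart pythlet then covers the "reliable way to keep it alive" requirement without touching Apache.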
