The proper way to scale a Python Tornado application

I am looking for a way to scale a single instance of a Tornado application out to many. I have 5 servers and want to run 4 instances of the application on each. The main issue I don't know how to resolve is how to make the instances communicate with one another correctly. I see the following approaches:
1. Use memcached for sharing data. I don't think this approach is good, because a lot of traffic would flow to the server running memcached, so traffic-related issues could appear in the future.
2. Open sockets between every pair of instances. To me this seems too hard to maintain as a way of communicating.
3. Use a tool like ZeroMQ. I am not familiar with this technology; could it be a way to scale the application across servers?
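For reference, a rough sketch of what approach 3 could look like with pyzmq, where every instance publishes its own events and subscribes to all of its peers (the host names, ports, and message format here are hypothetical):

    import zmq

    ctx = zmq.Context()

    # Every instance binds a PUB socket on its own port (5556 for this one)...
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")

    # ...and SUBscribes to every peer instance on every server.
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")         # no filter: receive everything
    for host in ("srv1", "srv2", "srv3", "srv4", "srv5"):
        for port in (5556, 5557, 5558, 5559):  # 4 instances per server
            sub.connect("tcp://%s:%d" % (host, port))

    pub.send(b"user-42 logged in")  # fans out to all instances, no central hop
    message = sub.recv()            # events from the other instances arrive here

Whether this beats a central broker or cache depends on how much state the instances actually need to share.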

I'm actually looking at something similar, and the thought I have come up with is this: use the Python multiprocessing module (http://docs.python.org/library/multiprocessing.html) to link the processes together on each individual server, then use a memcached server for session-specific data (session IDs, IP information, anything used to tie the session to a specific user and to their thread of activity). The rest is data driven from a DB instance.
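A minimal sketch of that idea, assuming the python-memcached client; the queue, the key scheme, and handle_request() are placeholders for the real per-session work:

    import multiprocessing
    import memcache

    def handle_request(session):
        print("handling %r" % (session,))         # stand-in for the real work

    def worker(jobs):
        mc = memcache.Client(["127.0.0.1:11211"])
        for session_id in iter(jobs.get, None):       # None = shutdown signal
            session = mc.get("session:%s" % session_id)  # shared session record
            handle_request(session)

    if __name__ == "__main__":
        jobs = multiprocessing.Queue()
        procs = [multiprocessing.Process(target=worker, args=(jobs,))
                 for _ in range(4)]   # 4 app processes per server, as asked
        for p in procs:
            p.start()
        jobs.put("abc123")            # hypothetical session id to handle
        for p in procs:
            jobs.put(None)            # tell each worker to shut down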

What you could do is run a memcached instance and a Tornado instance on each server. Make the memcached instances master-master replicate with each other using repcached, so each Tornado instance can read memcached data from its own machine, as sketched below. That gives you four servers for the Tornado and memcached instances, with the fifth running haproxy to load-balance the others.
http://www.haproxy.org/
http://repcached.lab.klab.org/
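From the application side, the layout might look like this sketch (assuming the python-memcached client): each Tornado instance talks only to the memcached replica on its own machine, and repcached keeps the replicas in sync, so cache reads never cross the network.

    import memcache
    import tornado.ioloop
    import tornado.web

    mc = memcache.Client(["127.0.0.1:11211"])   # always the local replica

    class HitsHandler(tornado.web.RequestHandler):
        def get(self):
            hits = mc.incr("hits")    # "hits" is a hypothetical shared key
            if hits is None:          # first request anywhere: key not set yet
                mc.set("hits", 1)
                hits = 1
            self.write("hits so far: %s" % hits)

    application = tornado.web.Application([(r"/", HitsHandler)])

    if __name__ == "__main__":
        application.listen(8888)
        tornado.ioloop.IOLoop.instance().start()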

Related

How to scale a CPU bound Twisted application?

I'm working on a twisted web application which uploads files and encrypts them, returning the url+key to the user.
I've been tasked with scaling this application. At the moment, when there are more than 3-4 concurrent upload requests, performance drops off significantly.
I'm no Twisted expert, but I assume this is because it runs in a single Python process, the application is CPU-heavy, and the GIL gets in the way?
How could I go about scaling this?
If this were a different framework, such as Flask, I would just put uWSGI in front of it and scale the number of processes. Would something similar work for Twisted, and if so, what tools are generally used for this?
If you can throw uWSGI in front of the application, I suppose it is pretty close to shared-nothing, so you can run multiple instances of the program and gain a core's worth of performance from each.
There are a couple of really obvious options for exactly how to run the multiple instances: you could have a load balancer in front, or you could have the processes share a listening port. There are probably more possibilities, too.
Since your protocol seems to be HTTP, any old HTTP load balancer should be applicable. It needn't be Twisted or Python based itself (though certainly it could be).
If you'd rather share a listening port, Twisted has APIs for passing file descriptors between processes (IReactorSocket) and for launching new processes that inherit a file descriptor from the parent (IReactorProcess).
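A minimal sketch of the shared-listening-port route, assuming a POSIX reactor that provides IReactorSocket (Twisted 11.1+); here the parent simply forks after binding, and every process adopts the inherited descriptor:

    import os
    import socket

    from twisted.internet import reactor
    from twisted.web.server import Site
    from twisted.web.static import Data

    # Bind and listen once, before any worker exists.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", 8080))
    listener.listen(128)

    for _ in range(3):     # 3 children + the parent: one process per core, say
        if os.fork() == 0:
            break          # children fall through and run a reactor too

    # Each process adopts the same descriptor; the kernel hands accepted
    # connections to whichever process is free to take them.
    site = Site(Data(b"one core's worth of Twisted", "text/plain"))
    reactor.adoptStreamPort(listener.fileno(), socket.AF_INET, site)
    reactor.run()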

Twisted + Django as a daemon process plus Django + Apache

I'm working on a distributed system where one process controls a piece of hardware, and I want it to run as a service. My app is based on Django + Twisted, so Twisted maintains the main loop and I access the database (SQLite) through Django; the entry point is a Django management command.
On the other hand, for the user interface, I am writing a web application in the same Django project on the same database (also using Crossbar as the WebSocket and WAMP server). This is a second Django process accessing the same database.
I'm looking for some validation here. Is anything fundamentally wrong with this approach? I'm particularly worried about database issues (two different processes accessing it via the Django ORM).
Consider that Django, like all WSGI-based web servers, almost always has multiple processes accessing the database: because a single WSGI process can handle only one connection at a time, it's normal for servers to run several processes in parallel once they receive any significant amount of traffic.
That doesn't mean there's no cause for concern. You have to treat the database as if the data might change between any two calls to it. Familiarize yourself with how Django uses transactions (the default is autocommit mode, not atomic requests), and…
…and oh, you said SQLite. Yeah, SQLite is probably not the best database to use when you need to write to it from multiple processes. I can imagine it might work for a single-user interface to a piece of hardware, but if you run into any problems when adding the webapp, you'll want to trade up to a database server like PostgreSQL.
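To make the transaction point concrete, a short sketch of being explicit about one (HardwareState is a hypothetical stand-in model):

    from django.db import models, transaction

    class HardwareState(models.Model):    # hypothetical stand-in model
        status = models.CharField(max_length=20)

    def mark_busy():
        with transaction.atomic():
            # SQLite holds a database-wide write lock for the duration of
            # this block, which is why it copes poorly with many writers.
            hw = HardwareState.objects.get(pk=1)
            hw.status = "busy"
            hw.save()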
No, there is nothing inherently wrong with that approach. We currently use a similar approach for a lot of our work.

Questions about django thread safety

I have a django app which is used for managing registrations to a survey.
There are fixed number of slots and I want to "reserve" slots for users when they sign up.
In one of my views, I get the next available slot and reserve it (or redirect the user if there are no slots available.)
I want to protect against the case where two users signing up at the same time get registered for the same slot because the method get_next_available_slot returned the same slot for both of them.
For this I am trying to understand the use of processes and threads with Django's views.
1) Is each request handled in a separate thread, and will using the Python threading module's Lock() take care of exclusive access?
2) I am running Apache on RHEL with mod_wsgi. How do I configure Apache/mod_wsgi to make an easy, simple solution to the above situation possible?
I am not expecting a huge load on the web application at all, so I would prefer a simpler solution over a high-performance one.
You should not make assumptions about the thread/process setup of a Django application, because it depends on the web server you're using and how Django is integrated with it. Inter-process communication therefore should not rely on such details to be reliable. One good solution is to use Django's built-in cache framework for locks and shared data.
Here's a good example of a cache lock ensuring that only one instance of a Celery task runs at a time; you can apply the same pattern to regular requests as well.
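A sketch of the pattern, assuming a shared cache backend such as memcached; cache.add() is atomic, setting the key only if it is absent, so at most one process holds the lock (the key name and timeout are made up):

    from django.core.cache import cache

    LOCK_ID = "reserve-slot-lock"   # hypothetical lock key
    LOCK_EXPIRE = 10                # seconds; the lock frees itself on a crash

    def reserve_with_lock(do_reserve):
        if not cache.add(LOCK_ID, "locked", LOCK_EXPIRE):
            return None             # another process is reserving; retry later
        try:
            return do_reserve()     # e.g. the view's get_next_available_slot()
        finally:
            cache.delete(LOCK_ID)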
You shouldn't be worrying about this kind of stuff.
These slots are stored in a database, right? The database should handle all the locking for you; just make sure you run everything inside a transaction and you will be fine.
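For example, with a hypothetical Slot model on a backend that supports row locks (PostgreSQL or MySQL/InnoDB; SQLite does not support SELECT ... FOR UPDATE), a sketch might look like:

    from django.db import models, transaction

    class Slot(models.Model):               # hypothetical stand-in model
        reserved_by = models.CharField(max_length=100, null=True)

    def reserve_slot(username):
        with transaction.atomic():
            # select_for_update() locks the row until commit, so two
            # simultaneous signups can never be handed the same slot.
            slot = (Slot.objects.select_for_update()
                    .filter(reserved_by__isnull=True)
                    .order_by("id").first())
            if slot is None:
                return None                 # no slots left: redirect the user
            slot.reserved_by = username
            slot.save()
            return slot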

Sending messages between two Python servers

I have two servers: one running Django, the other likely to be written in plain Python. One puts 'tasks' into a database and the other processes those tasks.
They share a database, but I want the processor to react quickly to new tasks rather than polling periodically.
Are there any straightforward ways for two Python servers to talk to one another, or does the task processor have to expose web hooks or something?
It feels like there ought to be a blessed way to do this...
Look toward message brokers like ActiveMQ, RabbitMQ, or ZeroMQ. They are designed to solve problems like the one you've described.
I'm working on a real-time MMORPG with the server part written in Python, and our daemons currently queue tasks to each other using ActiveMQ with the STOMP protocol.
At a low level, message brokers keep socket connections open to their consumers, so this is more efficient than periodic polling.
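As an illustration with RabbitMQ and the pika client (pika 1.x API; the queue name and payload are made up): the Django side publishes a task id right after the database insert, and the processor blocks on the queue instead of polling.

    import pika

    def publish_task(task_id):
        # Producer side (the Django server), called after the DB insert.
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="tasks", durable=True)
        channel.basic_publish(exchange="", routing_key="tasks",
                              body=str(task_id))
        conn.close()

    def consume_tasks():
        # Consumer side (the task processor): blocks, no periodic polling.
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="tasks", durable=True)

        def on_task(ch, method, properties, body):
            print("processing task %s" % body)  # fetch the row and do the work
            ch.basic_ack(delivery_tag=method.delivery_tag)

        channel.basic_consume(queue="tasks", on_message_callback=on_task)
        channel.start_consuming()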
SimpleXMLRPCServer.
See my answer here: Network programming in Python
You could also use periodic polling (in case something gets lost), but an XML-RPC server should be fine for most of the work.
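In brief, the SimpleXMLRPCServer route could look like the sketch below (Python 2 module names, matching the answer; Python 3 moved it to xmlrpc.server). The processor exposes one function, and Django calls it after inserting a task:

    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def notify_new_task(task_id):
        # Wake up and fetch the new row from the shared database.
        print("task %s is ready" % task_id)
        return True

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(notify_new_task)
    server.serve_forever()

    # On the Django side:
    #   import xmlrpclib
    #   xmlrpclib.ServerProxy("http://localhost:8000").notify_new_task(42)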
I tend to use polling. If the task table isn't that large it doesn't really involve that much overhead.
Otherwise you can implement a web service, or socket type connections.
You can use SOAPpy to start writing web-service code, or just extend BaseHTTPServer or something like that to accept messages (HTTP requests) from Django. That might be more programming than it's worth, but then again, if the tasks come only infrequently it might be the neatest solution.
I would, however, run my home-built mini-server in some protected environment; only Django should be able to make HTTP requests to it, as it's not easy to build a secure web server.
EDIT
I just thought of Twisted. It may be the perfect networking layer for your server if you decide not to use a message queue (some Twisted examples).

A good multithreaded python webserver?

I am looking for a Python webserver that is multithreaded instead of multi-process (as is the case with mod_python for Apache). I want it to be multithreaded because I want an in-memory object cache that will be used by the various HTTP threads. My webserver does a lot of expensive work and computes some large arrays that need to be cached in memory for future use, to avoid recomputing them. This is not possible in a multi-process web server environment. Storing this information in memcached is also not a good idea: the arrays are large, and storing them in memcached means the data coming back from memcached has to be deserialized, on top of the additional overhead of IPC.
I implemented a simple webserver using BaseHTTPServer; it gives good performance, but it gets stuck after a few hours. I need a more mature webserver. Is it possible to configure Apache to use mod_python under a threaded model so that I can do some object caching?
CherryPy. Features, as listed on the website:
A fast, HTTP/1.1-compliant, WSGI thread-pooled webserver. Typically, CherryPy itself takes only 1-2ms per page!
Support for any other WSGI-enabled webserver or adapter, including Apache, IIS, lighttpd, mod_python, FastCGI, SCGI, and mod_wsgi
Easy to run multiple HTTP servers (e.g. on multiple ports) at once
A powerful configuration system for developers and deployers alike
A flexible plugin system
Built-in tools for caching, encoding, sessions, authorization, static content, and many more
A native mod_python adapter
A complete test suite
Swappable and customizable...everything.
Built-in profiling, coverage, and testing support.
Consider reconsidering your design. Maintaining that much state in your webserver is probably a bad idea. Multi-process is a much better way to go for stability.
Is there another way to share state between separate processes? What about a service? Database? Index?
It seems unlikely that maintaining a huge array of data in memory and relying on a single multi-threaded process to serve all your requests is the best design or architecture for your app.
Twisted can serve as such a web server. While not multithreaded itself, a (not yet released) multithreaded WSGI container is present in the current trunk. You can check out the SVN repository and then run:
twistd web --wsgi=your.wsgi.application
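For completeness, a minimal WSGI callable to go with that command, saved so it is importable as your.wsgi.application (the dotted path in the command above):

    def application(environ, start_response):
        # The container runs each call in a pool thread, so module-level
        # objects (like a cache) are shared across requests.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"served by Twisted's thread-pooled WSGI container\n"]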
It's hard to give a definitive answer without knowing what kind of site you are working on and what kind of load you are expecting. Sub-second performance may be a serious requirement, or it may not. If you really need to save that last millisecond, then you absolutely need to keep your arrays in memory. However, as others have suggested, it is more than likely that you don't and could get by with something else.

Your usage pattern of the data in the array may affect what choices you make. You probably don't need access to the entire set of data from the array all at once, so you could break your data up into smaller chunks and put those chunks in the cache instead of one big lump. Depending on how often your array data needs to be updated, you might choose between memcached, a local db (Berkeley DB, SQLite, a small MySQL installation, etc.) or a remote db. I'd say memcached for fairly frequent updates, a local db for something on the order of hourly, and a remote db for daily.

One more thing to consider is what happens after a cache miss. If 50 clients all suddenly get a cache miss and all of them decide at the same time to start regenerating those expensive arrays, your box(es) will quickly be reduced to 8086s. So you have to take into consideration how you will handle that; many articles out there cover how to recover from cache misses. Hope this is helpful.
Not multithreaded, but Twisted might serve your needs.
You could instead use a distributed cache that is accessible from each process, memcached being the example that springs to mind.
web.py has made me happy in the past. Consider checking it out.
But it does sound like an architectural redesign might be the proper, though more expensive, solution.
Perhaps you have a problem with your implementation in Python using BaseHTTPServer. There's no reason for it to "get stuck", and implementing a simple threaded server using BaseHTTPServer and threading shouldn't be difficult.
Also, see http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer about implementing a simple multi-threaded server with HTTPServer and ThreadingMixIn
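The pattern from that article in brief (Python 2 module names, matching the question's BaseHTTPServer): mixing in ThreadingMixIn gives each request its own thread, and all threads share the process's memory, which is what makes an in-process object cache possible. The cache here is a made-up example:

    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
    from SocketServer import ThreadingMixIn

    CACHE = {}   # hypothetical in-memory cache, visible to every thread

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = CACHE.setdefault(self.path,
                                    "computed once for %s" % self.path)
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
        pass   # each request is now handled in its own thread

    ThreadedHTTPServer(("localhost", 8080), Handler).serve_forever()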
I use CherryPy both personally and professionally, and I'm extremely happy with it. I even do the kinds of things you're describing, such as having global object caches, running other threads in the background, etc. And it integrates well with Apache: simply run CherryPy as a standalone server bound to localhost, then use Apache's mod_proxy and mod_rewrite to have Apache transparently forward your requests to CherryPy.
The CherryPy website is http://cherrypy.org/
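A small sketch of that setup: CherryPy standing alone on localhost (with Apache's mod_proxy in front), and a module-level cache shared by the whole thread pool (the cache and its key are made up):

    import cherrypy

    CACHE = {}   # hypothetical in-memory object cache, shared across requests

    class App(object):
        @cherrypy.expose
        def index(self):
            return CACHE.setdefault("big-array",
                                    "computed once, reused afterwards")

    cherrypy.config.update({"server.socket_host": "127.0.0.1",
                            "server.socket_port": 8080})
    cherrypy.quickstart(App())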
I actually had the same issue recently. Namely: we wrote a simple server using BaseHTTPServer and found that the fact that it's not multi-threaded was a big drawback.
My solution was to port the server to Pylons (http://pylonshq.com/). The port was fairly easy, and one benefit was that it's very easy to create a GUI using Pylons, so I was able to throw a status page on top of what's basically a daemon process.
I would summarize Pylons this way:
it's similar to Ruby on Rails in that it aims to be very easy to deploy web apps
its default templating language, Mako, is very nice to work with
it uses a very convenient system of routing URLs
for us performance is not an issue, so I can't guarantee that Pylons would perform adequately for your needs
you can use it with Apache and lighttpd, though I've not tried this
We also run an app with Twisted and are happy with it. Twisted has good performance, but I find its single-threaded, defer-to-thread programming model fairly complicated. It has lots of advantages, but it would not be my choice for a simple app.
Good luck.
Just to point out something different from the usual suspects...
Some years ago, while I was using Zope 2.x, I read about Medusa, as it was the web server used by the platform. It was advertised as working well under heavy load, and it can provide the functionality you're asking for.
