I've just begun learning sockets with Python, so I've written some examples of chat servers and clients. Most of what I've seen on the internet seems to use the threading module for (asynchronous) handling of clients' connections to the server. I do understand that for a scalable server you need to use some additional tricks, because thousands of threads can kill the server (correct me if I'm wrong, but is it due to the GIL?), but that's not my concern at the moment.
The strange thing is that I've found somewhere in the Python documentation that creating subprocesses is the right way to handle sockets (unfortunately I've lost the reference, sorry :( ).
So the question is: should I use threading or multiprocessing? Or is there an even better solution?
Please give me the answer and explain the difference to me.
By the way: I do know that there are things like Twisted which are well-written.
I'm not looking for a pre-made scalable server; instead, I'm trying to understand how to write one that can be scaled, or that will deal with at least 10k clients.
EDIT: The operating system is Linux.
Facebook needed a scalable server so they wrote Tornado (which uses async). Twisted is also famously scalable (it also uses async). Gunicorn is also a top performer (it uses multiple processes). None of the fast, scalable tools that I know about uses threading.
An easy way to experiment with the different approaches is to start with the SocketServer module in the standard library: http://docs.python.org/library/socketserver.html . It lets you easily switch approaches by alternately inheriting from either ThreadingMixIn or ForkingMixIn.
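For instance, here is a minimal sketch of that approach (assuming Python 2, where the module is spelled SocketServer; in Python 3 it's socketserver, and the port is arbitrary). Swapping the mix-in class is all it takes to switch between threads and forked processes:

```python
import SocketServer  # "socketserver" in Python 3

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # Echo each line back to the client until it disconnects.
        for line in self.rfile:
            self.wfile.write(line)

# Swap ThreadingMixIn for ForkingMixIn to handle each client in a
# forked process instead of a thread.
class ThreadedEchoServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    allow_reuse_address = True

if __name__ == "__main__":
    server = ThreadedEchoServer(("0.0.0.0", 9000), EchoHandler)
    server.serve_forever()
```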
Also, if you're interested in learning about the async approach, the easiest way to build your understanding is to read a blog post discussing the implementation of Tornado: http://golubenco.org/2009/09/19/understanding-the-code-inside-tornado-the-asynchronous-web-server-powering-friendfeed/
Good luck and happy computing :-)
thousands of threads can kill the server (correct me if I'm wrong, but is it due to GIL?)
For one thing, the GIL has nothing to do with the number of threads. If you are doing I/O within these threads, you could have hundreds of thousands of them without any problem from the GIL or anything else.
The GIL comes into play when you have CPU-intensive tasks.
See this very informative talk from David Beazley to learn more about the GIL.
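To make the point concrete, here is a rough sketch of a couple hundred purely I/O-bound threads; the target host is just a placeholder. Each thread spends nearly all its time blocked on the network, during which the GIL is released:

```python
import socket
import threading

def fetch(host):
    s = socket.create_connection((host, 80), timeout=10)
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    s.recv(1024)   # blocks here; the GIL is released while waiting on the network
    s.close()

threads = [threading.Thread(target=fetch, args=("example.com",))
           for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```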
Related
I'm creating this application, and I'm thinking of using Twisted for communication with users via XMPP (Jabber, a chat protocol), with the possibility of using other means of communication in the future as well. My application is designed to support, or rather rely on, (independently developed) plugins. Most plugins will spend most of their time doing I/O. Ideally, all plugins would use Deferreds for all their I/O and return immediately (i.e. non-blocking), but I'm concerned that asking plugin developers to do that is too much of a burden, and will slow down and discourage plugin development. Blocking high-level libraries are much more common (think Facebook or Twitter libraries), and asking a possibly not-great coder to read up on Deferreds before writing a simple 10-line Twitter library doesn't sound like something I want to do.
The Twisted docs state that the maximum default size for the threadPool is 10, and that I should "be careful that you understand threads and their resource usage before drastically altering the thread pool sizes", which I don't think I do (understand), so giving each plugin a thread of its own doesn't seem like a good idea either.
Any suggestions?
Thank you for your help.
[EDIT] A standalone (non-server) version of the application will also be available. Most plugin developers will probably be using the standalone version. That's why I'm worried that developers will choose the easy way out and create blocking plugins.
Don't use threads.
The best example of how to make things easy for people not familiar with Twisted is the way Scrapy defines its plugin interfaces. You never look at a reactor or Deferred or anything - you just define what to do when certain pages are scraped, as callbacks.
Alternately, don't worry about it too much. There are plenty of independently developed protocol support plugins that just use Twisted APIs directly; at the layer of implementing transport protocols, most people who can do it effectively have no problem learning Twisted.
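As a hedged sketch of that style, the host application can wrap whatever a plugin returns in maybeDeferred, so plugin authors just write plain methods and never see the reactor (the class and method names below are made up for illustration):

```python
from twisted.internet import defer

class EchoPlugin(object):
    def on_message(self, sender, text):
        # Plugin authors write an ordinary method; no Deferreds in sight.
        return "you said: %s" % text

class PluginHost(object):
    def __init__(self, plugins):
        self.plugins = plugins

    def dispatch(self, sender, text):
        # maybeDeferred accepts either a plain return value or a Deferred,
        # so advanced plugins can still be asynchronous if they want to.
        for plugin in self.plugins:
            d = defer.maybeDeferred(plugin.on_message, sender, text)
            d.addCallback(self._send_reply, sender)

    def _send_reply(self, reply, sender):
        if reply is not None:
            print("-> %s: %s" % (sender, reply))
```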
I have to write a little daemon that can check multiple (could be up to several hundred) email accounts for new messages.
My thoughts so far:
I could just create a new thread for each connection, using imapclient to retrieve the messages every x seconds, or use IMAP IDLE where possible. I could also modify imapclient a bit and select() over all the sockets that have IMAP IDLE activated, using only a single thread.
Are there any better approaches for solving this task?
If only you'd asked a few months from now: Python 3.3.1 will probably have a spiffy new async API. See http://code.google.com/p/tulip/ for the current prototype, but you probably don't want to use it yet.
If you're on Windows, you may be able to handle a few hundred threads without a problem. If so, it's probably the simplest solution. So, try it and see.
If you're on Unix, you probably want to use poll instead of select, because select scales badly when you get into the hundreds of connections. (epoll on linux or kqueue on Mac/BSD are even more scalable, but it doesn't usually matter until you get into the thousands of connections.)
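For illustration, a bare-bones poll() loop might look like the following; the sockets dict is assumed to already map file descriptors to connected IMAP sockets, and handle_idle_response is a hypothetical handler:

```python
import select

poller = select.poll()
sockets = {}   # fd -> already-connected socket, filled in elsewhere

for fd in sockets:
    poller.register(fd, select.POLLIN)

while True:
    # poll() takes a timeout in milliseconds and returns (fd, event) pairs.
    for fd, event in poller.poll(30 * 1000):
        if event & select.POLLIN:
            data = sockets[fd].recv(4096)
            if not data:
                poller.unregister(fd)           # server closed the connection
                sockets[fd].close()
                del sockets[fd]
            else:
                handle_idle_response(fd, data)  # hypothetical handler
```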
But there are a few things you might want to consider before doing this yourself:
Twisted
Tornado
Monocle
gevent
Twisted is definitely the hardest of these to get into—but it also comes with an IMAP client ready to go, among hundreds of other things, so if you're willing to deal with a bit of a learning curve, you may be done a lot faster.
Tornado feels the most like writing native select-type code. I don't actually know all of the features it comes with; it may have an IMAP client, but if not, you'll be hacking up imapclient the same way you were considering with select.
Monocle sits on top of either Twisted or Tornado, and lets you write code that's kind of like what's coming in 3.3.1 (although actually, you can do the same thing directly in Twisted with inlineCallbacks; it's just that the docs discourage you from learning that without learning everything else first). Again, you'd be hacking up imapclient here. (Or using Twisted's IMAP client instead… but at that point, you might as well use Twisted directly.)
gevent lets you write code that's almost the same as threaded (or synchronous) code and just magically makes it asynchronous. You may need to hack up imapclient a bit, but it may be as simple as running the magic monkeypatching utility, and that's it. And beyond that, you write the same code you'd write with threading, except that you create a bunch of greenlets instead of a bunch of threads, and you get an order of magnitude or two better scalability.
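A minimal sketch of that gevent approach might look like this; the account details are placeholders, and I'm assuming imapclient's IDLE helpers (idle / idle_check / idle_done), so treat it as an outline rather than tested code:

```python
from gevent import monkey
monkey.patch_all()   # make socket, ssl, etc. cooperative

import gevent
import imapclient

def watch(host, user, password):
    client = imapclient.IMAPClient(host, ssl=True)
    client.login(user, password)
    client.select_folder("INBOX")
    while True:
        client.idle()
        responses = client.idle_check(timeout=300)   # greenlet yields while waiting
        client.idle_done()
        if responses:
            print("%s: new activity %r" % (user, responses))

accounts = [("imap.example.com", "alice", "secret")]   # placeholder accounts
gevent.joinall([gevent.spawn(watch, *acct) for acct in accounts])
```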
If you're looking for the absolute maximum scalability, you'll probably want to parallelize and multiplex at the same time (e.g., run 8 processes, each using gevent, on Unix, or attach a native threadpool to IOCP on Windows), but for a few hundred connections this shouldn't be necessary.
This is for a moderation bot for C&C Renegade, in case anyone wants some background.
I have a class which will act as a parent to a load of subclasses that provide IRC connections, connections to the gamelog (UDP socket), etc, and I want to know if it is possible to split some of these subclasses (notably the two socket connections [IRC, gamelog]) into their own threads using the threading module.
If anyone has any suggestions, even if it's just saying it can't be done, I'd appreciate the input.
Tom
Edit: I have experience with working with threaded applications, so I'm not a complete noob, honest.
It is feasible; take a look at:
multiprocessing
Besides simple process forking, it also provides memory sharing, which is likely to be needed.
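For example, here is a rough sketch of one process per connection handler with a shared flag each side can read; the worker bodies are stubs, not your actual IRC/gamelog code:

```python
import multiprocessing

def irc_worker(status):
    # ... connect to IRC and moderate here ...
    status.value = 1   # mark the IRC link as up

def gamelog_worker(status):
    # ... listen on the UDP gamelog socket here ...
    status.value = 1

if __name__ == "__main__":
    irc_up = multiprocessing.Value("i", 0)       # integers in shared memory
    gamelog_up = multiprocessing.Value("i", 0)

    procs = [
        multiprocessing.Process(target=irc_worker, args=(irc_up,)),
        multiprocessing.Process(target=gamelog_worker, args=(gamelog_up,)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```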
The best option would be to run your app with gevent coroutines. These are much more lightweight than threads and processes. The library is built on green threads as its execution units. Here you can find a good comparison and benchmark of the execution models of Eventlet (a Python library that provides a synchronous interface for asynchronous I/O operations, using green threads to achieve cooperative sockets) and node.js.
I want to create a Python socket server to send data to and receive data from HTML5.
What is the best way to do it: a Python socket library, or just simple code?
Thanks
@srgerg's socket documentation is useful, but if you want to handle multiple sockets simultaneously, you'll also need other mechanisms, such as select, epoll, or kqueue (depending upon your platform). (You could also spawn multiple processes using fork, or threads if the Python threading implementation meets your needs, but both these approaches have enough complications that I'm reluctant to suggest them.)
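If you do go the plain-socket route, a minimal select() loop looks something like this; the port is arbitrary and the handler just echoes, so treat it as a starting point only:

```python
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen(5)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s is server:
            conn, addr = server.accept()   # new client connecting
            sockets.append(conn)
        else:
            data = s.recv(4096)
            if data:
                s.sendall(data)            # echo back for demonstration
            else:
                sockets.remove(s)          # client went away
                s.close()
```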
Another approach is to use Twisted to manage your networking via an event loop, similar to using libevent, but I always found Twisted documentation difficult to follow. Maybe you will have better luck than I did.
The situation is that I have a small datacenter, with each server running python instances. It's not your usual distributed worker setup, as each server has a specific role with an appropriate long-running process.
I'm looking for good ways to implement the cross-server communication. REST seems like overkill. XML-RPC seems nice, but I haven't played with it yet. What other libraries should I be looking at to get this done?
Requirements:
Computation servers crunch numbers in the background. Other servers would like to occasionally ask them for values, based upon their calculation sets. I know this seems pretty well aligned with a REST mentality, but I'm curious about other options.
Twisted's Perspective Broker is an extremely easy-to-use and robust mechanism for cross-server communication. It's definitely worth a look.
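For a taste of it, the server side can be as small as this sketch; the method name, return value, and port are made up. A client would connect with pb.PBClientFactory and call callRemote("get_value", ...):

```python
from twisted.internet import reactor
from twisted.spread import pb

class Calculator(pb.Root):
    # Methods prefixed with remote_ become callable from other hosts.
    def remote_get_value(self, key):
        return {"key": key, "value": 42}   # stubbed result

if __name__ == "__main__":
    reactor.listenTCP(8789, pb.PBServerFactory(Calculator()))
    reactor.run()
```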
It wasn't obvious from your question, but if getting answers back synchronously doesn't matter to you (i.e., you are just asking for work to be performed), you might want to consider just using a job queue. It's generally the easiest way to communicate between hosts. If you don't mind depending on AWS, using SQS is super simple. If you can't depend on AWS, then you might want to try something like RabbitMQ. Many times, problems that we think need synchronous communication are really just queues in disguise.
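As a hedged sketch of that queue idea with RabbitMQ via pika (queue name, host, and message format are all placeholders), the asking side just publishes a request and moves on; a worker on the computation server consumes the same queue:

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("rabbit.internal"))
channel = connection.channel()
channel.queue_declare(queue="calc_requests", durable=True)

# Ask a computation server for a value without waiting for the answer.
channel.basic_publish(
    exchange="",
    routing_key="calc_requests",
    body=json.dumps({"set": "thermal", "key": "max_load"}),
)
connection.close()
```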