I've been evaluating solutions for supporting WebSockets via SockJS on a standalone Python server, and so far I've found two candidates.
I need to write a complex, scalable WebSocket-based web application, and I'm afraid it will be hard to scale Tornado; Vert.x seems better at horizontal scaling of WebSockets.
I also understand that Redis can be used in conjunction with Tornado to scale a pub/sub system horizontally, and HAProxy to scale the SockJS requests.
Between Vert.x and Tornado, which is the preferred solution for writing a scalable system that supports SockJS?
Vert.x has built-in clustering support. I haven't tried it with many nodes, but it seemed to work well with a few. Internally it uses Hazelcast to organise the nodes.
Vert.x also runs on the JVM, which already has many monitoring/admin tools that might be useful. So Vert.x seems to me like the "batteries included" solution.
You can also scale horizontally using SockJS-Tornado + RabbitMQ + Memcached.
The RabbitMQ broker plays the role of a messaging bus between physical server A and physical server B.
All information about the servers can be stored in Memcached. For instance, suppose you need to send message M from client socket C1(A) to client socket C2(B):
if the receiver is hosted on the same server (determined by checking Memcached), send the message directly using the SockJS router;
otherwise, send M via RabbitMQ broker B1(A) (using routing logic) to B2(B), where SockJS router B can deliver the message directly to the original receiver C2(B), as sketched below.
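A minimal sketch of that routing decision, assuming Memcached keeps a socket-id to server-id mapping, pika as the RabbitMQ client, and a hypothetical local_router object wrapping the SockJS router:

    import json
    import memcache
    import pika

    mc = memcache.Client(['127.0.0.1:11211'])
    LOCAL_SERVER = 'A'  # identifier of the server this process runs on

    def route_message(receiver_id, payload, local_router):
        # Look up which physical server hosts the receiver's socket.
        target_server = mc.get('socket:%s' % receiver_id)
        if target_server == LOCAL_SERVER:
            # Same server: deliver directly through the local SockJS router.
            local_router.send(receiver_id, payload)
        else:
            # Other server: publish to RabbitMQ, routed by server id.
            conn = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1'))
            channel = conn.channel()
            channel.basic_publish(
                exchange='sockjs',
                routing_key='server.%s' % target_server,
                body=json.dumps({'to': receiver_id, 'msg': payload}))
            conn.close()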
Since RabbitMQ's AMQP broker is implemented in Erlang, message passing is very stable and well suited to high-load distributed applications. To back this up, see: http://www.rabbitmq.com/blog/2012/04/25/rabbitmq-performance-measurements-part-2/
Each physical server (say, a 4-core Xeon with 4 GB RAM and a 140-1000 GB HDD) can handle 3-5 SockJS-Tornado instances. The SockJS implementation also works well behind a reverse proxy (HAProxy), using additional parameters in the URL.
For distributed load testing you can use JMeter, or Tsung (written in Erlang).
I have used this approach in a couple of distributed apps.
In addition, don't forget to use Tornado's in-process memory as an L1 cache.
I'm working on a project to expose a set of methods from various client machines to a server for the purpose of information gathering and automation. I'm using Python at the moment, and SimpleXMLRPCServer seems to work great on a local network, where I know the addresses of the client machines, and there's no NAT or firewall.
The problem is that the client/server model is backwards for what I want to do. Rather than have an RPC server running on the client machine, exposing a service to the software client, I'd like to have a server listening for connections from clients, which connect and expose the service to the server.
I'd thought about tunneling, remote port forwarding with SSH, or a VPN, but those options don't scale well, and introduce more overhead and complexity than I'd like.
I'm thinking I could write a server and client to reverse the model, but I don't want to reinvent the wheel if it already exists. It seems to me that this would be a common enough problem that there would be a solution for it already.
I'm also just cutting my teeth on Python and networked services, so it's possible I'm asking the wrong question entirely.
What you want is probably WAMP routed RPC.
It seems to address your issue and it's very convenient once you get used to it.
The idea is to put the WAMP router in the cloud (say), and have both the RPC caller and the RPC callee connect to it as clients with outbound connections.
I was also using a VPN to connect IoT devices over the internet, but switching to this router model really simplified things, and it scales pretty well.
By the way, WAMP is implemented in several languages, including Python.
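A minimal sketch of a WAMP callee using the Autobahn library (the router URL, realm, and procedure URI below are placeholders):

    from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

    class ClientService(ApplicationSession):
        async def onJoin(self, details):
            def get_status():
                # Any WAMP caller connected to the router can invoke this.
                return {'cpu': 0.4, 'disk_free_gb': 12}
            # Register the procedure under a URI on the router.
            await self.register(get_status, 'com.example.client1.get_status')

    if __name__ == '__main__':
        runner = ApplicationRunner('ws://router.example.com:8080/ws', 'realm1')
        runner.run(ClientService)

The machine exposing the service only ever makes an outbound connection, which is exactly the reversal of the client/server model asked about.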
Maybe Pyro can be of use? It allows for many forms of distributed computing in Python. Your requirements aren't very specific, so it's hard to say whether it will work for you, but I advise you to have a look at the documentation or the many Pyro examples and see if something matches what you want to do.
Pyro abstracts most of the networking intricacy away; you simply invoke a method on a (remote) Python object.
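A minimal Pyro4 server sketch (the class and method names are illustrative):

    import Pyro4

    @Pyro4.expose
    class Greeter(object):
        def hello(self, name):
            return 'Hello, %s' % name

    # Register the object with a Pyro daemon and serve requests.
    daemon = Pyro4.Daemon()
    uri = daemon.register(Greeter)
    print('Ready, URI = %s' % uri)
    daemon.requestLoop()

A client then calls Pyro4.Proxy(uri).hello('world') as if it were a local object.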
I'm looking for a high-level Python library for establishing HTTP connections to a web server.
The connections should ideally remain open (persistent) for sending and receiving two-way messages, so WebSockets look great to me.
As I want it to be compatible with most HTTP proxies, I'm thinking about a "fallback" mode using HTTP polling (Comet style).
My problem is that I can't find a library that manages these two kinds of connections transparently. Ideally, I would establish the connection to the server with one of the techniques (WebSocket or Comet), then simply send/receive messages using the same functions for both types of connections.
I've found many Python servers and some JS clients for this purpose, but no Python clients.
I have looked at: Twisted, Tornado, ZeroMQ, py4ws.
Have you taken a look at Socket.IO? It mainly works with WebSockets but has plenty of fallbacks and is thus supposed to be supported by all browsers.
For the server side, I've used Flask together with gevent-socketio. Miguel Grinberg has recently released the Flask-SocketIO extension, which is a nice abstraction for working with Flask and gevent-socketio. gevent-socketio is built on the nice gevent library.
gevent-socketio should work fine with other Python frameworks, such as Django and Bottle.
I'm not entirely sure this fits your bill, but it's probably worth a look.
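A minimal Flask-SocketIO sketch (the event names are illustrative):

    from flask import Flask
    from flask_socketio import SocketIO, emit

    app = Flask(__name__)
    socketio = SocketIO(app)

    @socketio.on('my event')
    def handle_event(message):
        # Echo the payload back to the emitting client.
        emit('my response', {'data': message})

    if __name__ == '__main__':
        socketio.run(app)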
I think the Python socketIO-client library could be a good solution:
https://github.com/invisibleroads/socketIO-client
It can interact easily with a Socket.IO Node.js server, using the same paradigms.
I tested it: it defaults to a WebSocket connection and falls back to xhr-polling, which is great (I actually tested this feature through a proxy).
Example (fleshed out to be runnable; the library's BaseNamespace is used for the namespace, and some_callback_function is any callable taking the event arguments):

    from socketIO_client import SocketIO, BaseNamespace

    with SocketIO('http://127.0.0.1', 7777, BaseNamespace,
                  transports=['websocket', 'xhr-polling'],
                  proxies={'http': 'http://localhost:8888'}) as socketIO:
        socketIO.on('foo', some_callback_function)
        socketIO.emit('bar')
        socketIO.wait()
I use PHP, JS, HTML, and CSS. I'm willing to learn Ruby or Python if that is the best option.
My next project will involve live data being fed to users from the server and vice versa. I have shell access on my shared server, but I'm not sure about access to ports. Is it possible to use WebSockets, or any other efficient server-client connection, on a shared hosting account, and if so, what do I need to do?
For the best performance and full control of your setup, you need "your own" server.
Today there is a huge number of virtual server providers, which means you get full control over your IP while the physical server is still shared between many clients, meaning cheaper prices and more flexibility.
I recommend the free tier program at Amazon EC2; you can always cancel after the free period, and they have many geographical locations to choose from.
Another provider in Europe that I have been satisfied with is Tilaa.
You can probably find many more alternatives that suit your needs on the Web Hosting Talk forum.
Until a few weeks ago, WebSocket deployment required either a standalone server running on a different port, or a server-side proxy like Varnish/HAProxy listening on port 80 and redirecting normal HTTP traffic. The latest nginx versions have added built-in support for WebSockets, but unless your hosting provider runs one of them, you're out of luck. (Note that I don't have personal experience with this nginx feature.)
Personally, I find that for most applications WebSockets can be replaced with Server-sent events instead: a very lightweight protocol which is basically another HTTP connection that stays open on the server side and sends a stream of plain text with messages separated by double newlines.
It's supported in most decent browsers, but since this excludes Internet Explorer, there are polyfills available here and here.
This covers one side of the connection, the one that is usually implemented with long-polling. The other direction can be covered the usual way with XHR. The end result is very similar to WebSockets IMO, but with somewhat higher latency for client-to-server messages.
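A minimal sketch of a Server-sent events endpoint, here using Flask (the route and the message source are illustrative; SSE itself needs no special library):

    import time
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route('/stream')
    def stream():
        def generate():
            while True:
                # Each SSE message is one or more "data:" lines
                # terminated by a blank line (the double newline).
                yield 'data: %s\n\n' % time.strftime('%H:%M:%S')
                time.sleep(1)
        return Response(generate(), mimetype='text/event-stream')

On the browser side, new EventSource('/stream') receives these messages as they arrive.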
I'm trying to work out how to approach building a "machine" to send and receive messages to WebSphere MQ, via Twisted. I want it to be as generic as possible, so I can reuse it for many different situations that interface with MQ.
I've used Twisted before, but that was many years ago, and I'm trying to resurrect the knowledge I once had...
The specific problem I'm having is how to implement the MQ IO using Twisted. There's a pymqi Python library that interfaces with MQ, and it provides all the interfaces I need. The MQ calls I need to implement are:
initiate a connection to a specific MQ server/port/channel/queue-manager/queue combination
take content and post it as a message to the desired queue
poll a queue and return the content of the next message in the queue
send a request to a queue manager to find the number of messages currently in a queue
All of these involve blocking calls to MQ.
As I'm intending to reuse the Twisted/MQ interface many times across a range of projects, should I be looking to implement the MQ IO as a Twisted protocol, as a Twisted transport, or just call the pymqi methods via deferToThread() calls? I realise this is a very broad question with possibly no definitive answer; I'm really after advice from those who may have encountered similar challenges before (i.e. working with queueing interfaces that will always block) and found a way that works well.
If you're going to use this functionality a lot, then a native Twisted implementation is probably worth the effort. A wrapper based on deferToThread will be less work, but it will also be harder to test and debug, perform less well, and have problems on certain platforms where Python threads don't work very well (e.g. FreeBSD).
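For reference, a minimal sketch of the deferToThread approach, assuming pymqi's connect/Queue API (the connection details are illustrative):

    import pymqi
    from twisted.internet import threads

    class MQClient(object):
        def __init__(self, queue_manager, channel, conn_info):
            # Blocking connect, done once up front.
            self.qmgr = pymqi.connect(queue_manager, channel, conn_info)

        def put(self, queue_name, message):
            def _put():
                queue = pymqi.Queue(self.qmgr, queue_name)
                try:
                    queue.put(message)
                finally:
                    queue.close()
            # Run the blocking MQ call in the reactor's thread pool;
            # returns a Deferred that fires when the put completes.
            return threads.deferToThread(_put)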
The approach to take for a native Twisted implementation is probably to implement a protocol that can speak to MQ servers, give it a rich API for interacting with channels, queues, queue managers, etc., and then build a layer on top of that which abstracts the actual network connection away from the application (as I believe mqi/pymqi largely do).
I'm looking for a good server/client protocol supported in Python for making data requests and file transfers between one server and many clients. Security is also an issue, so secure login would be a plus. I've been looking into XML-RPC, but it looks to be a pretty old (and possibly little-used these days?) protocol.
If you are looking to do file transfers, XML-RPC is likely a bad choice: it requires you to encode all of your data as XML (and load it into memory).
"Data requests" and "file transfers" sounds a lot like plain old HTTP to me, but your statement of the problem doesn't make your requirements clear. What kind of information needs to be encoded in the request? Would a URL like "http://yourserver.example.com/service/request?color=yellow&flavor=banana" be good enough?
There are lots of HTTP clients and servers in Python, none of which are especially great, but all of which I'm sure will get the job done for basic file transfers. You can do security the "normal" web way, which is to use HTTPS and passwords, which will probably be sufficient.
If you want two-way communication, then HTTP falls down, and a protocol like Twisted's Perspective Broker (PB) or Asynchronous Messaging Protocol (AMP) might suit you better. These protocols are certainly well supported by Twisted.
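A minimal AMP sketch with Twisted (the command and field names are illustrative):

    from twisted.protocols import amp

    class Sum(amp.Command):
        arguments = [('a', amp.Integer()), ('b', amp.Integer())]
        response = [('total', amp.Integer())]

    class MathProtocol(amp.AMP):
        @Sum.responder
        def sum(self, a, b):
            # A responder returns a dict matching the declared response fields.
            return {'total': a + b}

A connected client calls protocol.callRemote(Sum, a=1, b=2) and gets back a Deferred that fires with {'total': 3}.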
Protocol Buffers was released by Google as a way of serializing data in a very compact, efficient way. It has support for C++, Java, and Python. I haven't used it yet, but looking at the source, there seem to be RPC clients and servers for each language.
I personally have used XML-RPC on several projects, and it always did exactly what I was hoping for. I was usually going between C++, Java and Python. I use libxmlrpc in Python often because it's easy to memorize and type interactively, but it is actually much slower than the alternative pyxmlrpc.
PyAMF is mostly for RPC with Flash clients, but it's a compact RPC format worth looking at too.
When you have Python on both ends, I don't believe anything beats Pyro (Python Remote Objects). Pyro even has a "name server" that lets services announce their availability on a network. Clients use the name server to find the services they need, no matter where they're active at a particular moment. This gives you free redundancy, and the ability to move services from one machine to another without any downtime.
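A small sketch of a name server lookup with Pyro4 (the registered name is illustrative):

    import Pyro4

    # Resolve the logical name via the Pyro name server,
    # then call the remote object as if it were local.
    proxy = Pyro4.Proxy('PYRONAME:example.greeter')
    print(proxy.hello('world'))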
For security, I'd tunnel over SSH, or use TLS or SSL at the connection level. Of course, all these options are essentially the same; they just involve varying degrees of setup difficulty.
Pyro (Python Remote Objects) is fairly clever if all your servers and clients are going to be in Python. I use XMPP a lot, though, since I'm communicating with hosts that are not always Python. XMPP lends itself to being extended fairly easily, too.
There is an excellent XMPP library for Python called PyXMPP, which is reasonably up to date and has no dependency on Twisted.
I suggest you look at:
1. XMLRPC
2. JSONRPC
3. SOAP
4. REST/ATOM
XMLRPC is a valid choice. Don't worry that it is old; that is not a problem. It is so simple that little has needed changing since the original specification. The pro is that every programming language I know of has a client library for it, certainly Python. I made it work with mod_python and had no problem at all.
The big problem with it is its verbosity: for simple values there is a lot of XML overhead. You can gzip it, of course, but then you lose some debugging ability with tools like Fiddler.
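A minimal XML-RPC sketch using Python's standard library (Python 2 module names, to match the era of this answer; host and port are illustrative):

    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def add(a, b):
        return a + b

    server = SimpleXMLRPCServer(('localhost', 8000))
    server.register_function(add, 'add')
    server.serve_forever()

A client is just xmlrpclib.ServerProxy('http://localhost:8000').add(2, 3).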
My personal preference is JSONRPC. It has all of XMLRPC's advantages and is very compact. Furthermore, JavaScript clients can "eval" it, so no parsing is necessary. Most implementations are built for version 1.0 of the standard. I have seen diverse attempts to improve on it, called 1.1, 1.2, and 2.0, but they are not built one on top of another and, to my knowledge, are not widely supported yet. 2.0 looks the best, but I would still stick with 1.0 for now (October 2008).
The third candidate would be REST/ATOM. REST is a principle, and ATOM is how you convey the bulk of the data when it needs to go in POST and PUT requests and GET responses.
For a very nice implementation of it, look at GData, Google's API. Really nice.
SOAP is old, and lots of libraries and languages support it. It is heavy and complicated, but if your primary clients are .NET or Java, it might be worth the bother.
Visual Studio can import your WSDL file and create a wrapper, and to a C# programmer it will look just like a local assembly.
The nice thing about all this is that if you architect your solution right, existing Python libraries will let you support more than one of these with almost no overhead. XMLRPC and JSONRPC are an especially good match.
Regarding authentication: XMLRPC and JSONRPC don't bother defining one; it is a separate concern from the serialization. So you can implement Basic authentication, Digest authentication, or your own scheme with any of them. I have seen a couple of examples of client-side Digest authentication for Python, but have yet to see a server-side one. If you use Apache, you might not need one at all, using the mod_auth_digest Apache module instead. This depends on the nature of your application.
Transport security is obviously SSL (HTTPS). I can't currently remember how XMLRPC deals with it, but with the JSONRPC implementation that I have it is trivial: you merely change http to https in your JSONRPC URLs and it will go over an SSL-enabled transport.
HTTP seems to suit your requirements and is very well supported in Python.
Twisted is good for serious asynchronous network programming in Python, but it has a steep learning curve, so it might be worth using something simpler unless you know your system will need to handle a lot of concurrency.
To start, I would suggest using urllib for the client and a WSGI service behind Apache for the server. Apache can be set up to deal with HTTPS fairly simply.
SSH can be a good choice for file transfer and remote control, especially if you are concerned with secure login. Most Linux and Solaris servers already run an SSH service for administration, so if your Python program uses SSH you don't need to open any additional ports or services on remote machines.
OpenSSH is the standard, portable SSH client and server, and can be used via subprocesses from Python. If you want more flexibility, Twisted includes Twisted Conch, an SSH client and server implementation which provides flexible, programmable control of an SSH stack, on both Linux and Windows. I use both in production.
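A small sketch of the subprocess route (the host and paths are illustrative):

    import subprocess

    # Copy a file to a remote machine using the OpenSSH scp client.
    subprocess.check_call(
        ['scp', 'report.csv', 'user@host.example.com:/data/'])

    # Run a remote command over SSH and capture its output.
    output = subprocess.check_output(
        ['ssh', 'user@host.example.com', 'df -h /data'])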
I'd use HTTP and start by understanding what Python's standard library offers.
Then I'd move on to the more industrial-strength Twisted library.
There is no need to use HTTP (indeed, HTTP is not great for RPC in general in some respects), and no need to use a standards-based protocol if you're talking about a Python client talking to a Python server.
Use a Python-specific RPC library such as Pyro, or what Twisted provides (twisted.spread).
XMLRPC is very simple to get started with, and at my previous job we used it extensively for intra-node communication in a distributed system. As long as you keep in mind that the None value can't be transferred by default, it's dead easy to work with, and it's included in Python's standard library.
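For reference, the standard library does offer a non-standard extension for None when both ends are Python (a brief sketch; the host and port are illustrative):

    import xmlrpclib

    # allow_none marshals None as the non-standard <nil/> extension;
    # the server must also be created with allow_none=True.
    proxy = xmlrpclib.ServerProxy('http://localhost:8000', allow_none=True)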
Run it over HTTPS and add a username/password parameter to all calls, and you'll have simple security in place. I'm not sure how easy it is to verify the server certificate in Python, though.
However, if you are transferring large amounts of data, the encoding into XML might become a bottleneck, so a REST-inspired architecture over HTTPS may be a better fit than xmlrpclib.
Facebook's Thrift project may be a good answer. It uses a lightweight protocol to pass objects around and allows you to use any language you wish. It may fall down on security, though, as I believe there is none.
In the RPC field, JSON-RPC will bring a big performance improvement over XML-RPC:
http://json-rpc.org/wiki/python-json-rpc