I'm new to Twisted, and I'm having trouble working out how I should organise my code. The client connects to a TCP (SSL) control channel and then tries to connect to the same IP:port over UDP for a low-latency data channel, based on encryption settings provided over TCP. If it can't, the TCP control channel is used for the data as well. I'd like to write a reusable client such that people can override a class with methods such as dataReceived, controlMessageXReceived, sendControlMessageX, sendDataMessage, etc., with whether the UDP channel is in use or not abstracted away into my code.
I currently have a Protocol that can understand the TCP control channel; for testing purposes I've overridden connectionMade() there to send set-up messages and confirm everything works (it can understand the server and vice versa), but I have no idea how to integrate that into a wider context.
(For the curious, this is a client for Mumble - the protocol specification is here, and I'm trying to update this horrible pile of unmaintainable (multithreaded) code into something modern.)
Consider mirroring the protocol/transport separation already present in Twisted.
Protocol doesn't know anything about TCP. It just knows how to handle a stream of bytes. It's the transport that knows about TCP (or TLS, or UNIX sockets, or something else).
There is an explicit interface between Protocol and its transport (actually, there are two - IProtocol lets a transport know what it can do to the protocol object and ITransport lets the protocol know what it can do to the transport object).
Invent an interface that makes sense for the application you're working with. For example, Protocol has dataReceived because "some bytes arrived" is one of the things that happens to "a stream of bytes". What things can happen in Mumble? They might be, for example, "a user connected to the server" or "a message arrived in the channel you're in". Your interface might have a method for each of these.
Now application developers can implement their own novel behavior by writing an implementation of this interface - which is explicitly and completely defined - and then plugging that implementation into your library (for example, perhaps your library could offer a connectToMumbleServer(address, mumbleApplicationObject) API).
Your library knows exactly what it's allowed to do with the application object because the interface is explicitly defined. If you repeat this process for the opposite direction then the application developer will know what they can do to the mumble server using your library, too (eg "join a channel" or "send a packet of audio data").
You could provide a base class (like Protocol) for applications to subclass, but this is a very minor bit of convenience. In case you haven't recently, open up twisted/internet/protocol.py and look at the implementation of the Protocol class. There's almost nothing there, and none of what is there is very complicated or difficult to replicate. If application developers had to start off subclassing object and type out all of the methods themselves, they wouldn't be at much of a disadvantage.
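Such an interface can be sketched with nothing but the standard library. Twisted itself declares its interfaces with zope.interface, but an abstract base class shows the same idea; every name below (MumbleApplication, userConnected, channelMessageReceived, EchoApplication) is made up for illustration:

```python
from abc import ABC, abstractmethod

class MumbleApplication(ABC):
    """The explicit contract: what the library may do to the application."""

    @abstractmethod
    def userConnected(self, username):
        """Called when a user connects to the server."""

    @abstractmethod
    def channelMessageReceived(self, sender, text):
        """Called when a message arrives in the channel you're in."""

# An application developer supplies behaviour by implementing the contract:
class EchoApplication(MumbleApplication):
    def __init__(self):
        self.log = []

    def userConnected(self, username):
        self.log.append(("connect", username))

    def channelMessageReceived(self, sender, text):
        self.log.append(("message", sender, text))

# The library only ever calls methods named in the interface:
app = EchoApplication()
app.userConnected("alice")
app.channelMessageReceived("alice", "hi")
print(app.log)
```

The library side would hold a reference to such an object (e.g. via a hypothetical connectToMumbleServer(address, app)) and call these methods as protocol events arrive.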
Related
Is it possible, with Python, to communicate directly on the data link layer, prior-to or outside of, an IP address? Similarly to communicating with USB?
I have a client interested in trying this. As far as I know, there's no way. But I never want to underestimate the power of Python.
There's nothing intrinsic about Python which prevents you from writing your own user-level networking stack. However, if you want to, say, access the raw Ethernet driver to send raw Ethernet packets, that has to be supported by the operating system.
I'll try to paint a vague picture of what's going on. Some of this you may know already (or not). A conventional operating system provides an abstraction called the system call layer to allow programs to interact with hardware. This abstraction is typically somewhat "high level" in that it abstracts away some of the details of the hardware. In an operating system which implements Unix abstractions, one of the network abstraction system calls is socket(int domain, int type, int proto), which creates a new socket endpoint. What's getting abstracted away here? Well, for most protocols, dealing with data-link layer details becomes unnecessary. Obviously you lose some flexibility here so that you can gain safety (you don't have to worry about clobbering other OS data structures if you have raw hardware access) and convenience (most people don't need to implement a user-level networking stack).
So, whether it "can" be done without modifying your kernel depends on what abstractions are provided by the OS. Linux provides the packet(7) interface which allows you to use AF_PACKET as your socket domain. According to the man page, "Packet sockets are used to receive or send raw packets at the device driver (OSI Layer 2) level."
So can this be accessed in Python? You bet!
import socket

# AF_PACKET is Linux-specific and requires root privileges
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
s.bind(("eth1", 0))
s should now be a socket which you can use to send raw packets. See the other Stack Overflow post for more information about how to do this -- they do a better job than I can. Note that AF_PACKET is Linux-specific; on Windows there is no equivalent built-in socket domain, and raw frame access typically requires a capture driver such as WinPcap/Npcap.
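Once such a socket exists, a minimal layer-2 frame can be assembled by hand. The header layout below follows standard Ethernet II framing (6-byte destination MAC, 6-byte source MAC, 2-byte EtherType); the helper name and addresses are made up for illustration:

```python
import struct

def build_ethernet_frame(dst, src, ethertype, payload):
    """Pack an Ethernet II header in front of a payload.

    dst/src are 6-byte MAC addresses; ethertype is a 16-bit integer
    (e.g. 0x0800 for IPv4).
    """
    header = struct.pack("!6s6sH", dst, src, ethertype)
    return header + payload

# Broadcast destination, an arbitrary source MAC, IPv4 EtherType:
frame = build_ethernet_frame(
    b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55", 0x0800, b"hello"
)
# The frame could then be sent with the AF_PACKET socket above: s.send(frame)
print(len(frame))  # 14-byte header + 5-byte payload
```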
I am using Twisted to run a rather complicated server that allows for data collection, communication, and commanding of a hardware device remotely. On the client side there are a number of data-retrieval and command operations available. Typically I use the wxPython reactor to interface with the client, but I would also like to set up a simpler command-line style interface.
Is there a reactor that I can use to set up a local, non-blocking, python-like or raw_input-style interface for the client? After successful access to the server, the server will occasionally send data down without it being requested, as a result of server-side events.
I have considered manhole, but I am not interested in accessing the server as an interface, I am strictly interested in accessing the client-side data and commands. This is mostly for debugging, but it can also come in handy for creating a much more rudimentary client interface when needed.
See the stdin.py and stdiodemo.py examples, I think that's similar to what you're aiming for. They demonstrate connecting a protocol (like a LineReceiver) to StandardIO.
I think you could also use a StandardIOEndpoint (and maybe we should update the examples for that), but that doesn't change the way you'd write your protocol.
I'm trying to use spyne (http://spyne.io) in my server with ZeroMQ and MessagePack. I've followed the examples to program the server side, but I can't find any example that shows me how to program the client side.
I've found the class spyne.client.zeromq.ZeroMQClient, but I don't know what the 'app' parameter of its constructor is supposed to be.
Thank you in advance!
Edit:
The (simplified) server-side code:
from spyne.application import Application
from spyne.protocol.msgpack import MessagePackRpc
from spyne.server.zeromq import ZeroMQServer
from spyne.service import ServiceBase
from spyne.decorator import srpc
from spyne.model.primitive import Unicode
class RadianteRPC(ServiceBase):
    @srpc(_returns=Unicode)
    def whoiam():
        return "Hello I am Seldon!"

radiante_rpc = Application(
    [RadianteRPC],
    tns="radiante.rpc",
    in_protocol=MessagePackRpc(validator="soft"),
    out_protocol=MessagePackRpc()
)

s = ZeroMQServer(radiante_rpc, "tcp://127.0.0.1:5001")
s.serve_forever()
Spyne author here.
There are a number of issues with Spyne's client transports.
First and most important: they require server code to work. That's because Spyne's WSDL parser is only halfway done, so there's no way to communicate the interface the server exposes to a client.
Once the WSDL parser is done, Spyne's client transports will be revived as well. They work just fine (the tests pass), but they are (slightly) obsolete and, as you noticed, don't have proper docs.
Now back to your question: The app parameter to the client constructor is the same application instance that goes to the server constructor. So if you do this:
c = ZeroMQClient("tcp://127.0.0.1:5001", radiante_rpc)
print c.service.whoiam()
It will print "Hello I am Seldon!"
Here's the full code I just committed: https://github.com/arskom/spyne/tree/master/examples/zeromq
BUT:
All this said, you should not use ZeroMQ for RPC.
I looked at ZeroMQ for RPC purposes back when its hype was at crazy levels (I even got my name in the ZeroMQ contributors list :)). I did not like what I saw, and I moved on.
Pasting my relevant news.yc comment from https://news.ycombinator.com/item?id=6089252 here:
In my experience, ZeroMQ is very fragile in RPC-like applications,
especially because it tries to abstract away the "connection". This
mindset is very appropriate when you're doing multicast (and ZeroMQ
rocks when doing multicast), but for unicast, I actually want to
detect a disconnection or a connection failure and handle it
appropriately before my outgoing buffers are choked to death. So, I'd
evaluate other alternatives before settling on ZeroMQ as a transport
for internal RPC-type messaging.
If you are fine with having the whole message in memory before parsing
(or sending) it (Http is not that bad when it comes to transferring
huge documents over the network), writing raw MessagePack document to
a regular TCP stream (or tucking it inside a UDP datagram) will do the
trick just fine. MessagePack library does support parsing streams --
see e.g. its Python example in its homepage (http://msgpack.org).
Disclosure: I'm just a happy MessagePack (and sometimes ZeroMQ) user.
I work on Spyne (http://spyne.io) so I just have experience with some
of the most popular protocols out there.
I seem to have written that comment more than a year ago. Fast forward to today, I got the MessagePack transport implemented and released in Spyne 2.11. So if you're looking for a lightweight transport for internally passing small messages, my recommendation would be to use it instead of ZeroMQ.
However, once you're outside HTTP-land, you're back to dealing with sockets at the system level, which may or may not be what you want, depending especially on the amount of resources you have to spare for this bit of your project.
Sadly, there is no documentation about it besides the examples I just put together here: https://github.com/arskom/spyne/tree/master/examples/msgpack_transport
The server code is fairly standard Spyne/Twisted code but the client is using system-level sockets to illustrate how it's supposed to work. I'd happily accept a pull request wrapping it to a proper Spyne client transport.
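The "raw MessagePack over a TCP stream" approach from the quoted comment can be sketched as follows, assuming the msgpack Python package is installed. The streaming Unpacker is fed bytes exactly as they would come off a socket, in arbitrary-sized fragments:

```python
import msgpack

# msgpack.Unpacker is a streaming parser: feed it raw chunks (e.g. straight
# from sock.recv()) and iterate it to get each complete document as soon as
# enough bytes have arrived.
unpacker = msgpack.Unpacker(raw=False)

payload = msgpack.packb({"method": "whoiam", "params": []})

# Simulate the bytes arriving over TCP in two fragments:
unpacker.feed(payload[:3])
partial = list(unpacker)   # not enough bytes yet, so nothing is yielded
unpacker.feed(payload[3:])
messages = list(unpacker)  # now the complete document comes out
print(partial, messages)
```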
I hope this helps. Patches are welcome.
Best regards,
I want to build a chat application which supports text messaging, group messaging, and file transfer (like NetMeeting). When I send data over a TCP socket I see that it is not structured; everything is sent as a plain string over TCP. I want to send it in a structured way, with a few headers like name:, ip:, data:, data_type_flag: (file or text message), etc. One Stack Overflow member told me to use Telepathy, but I can't find a simple tutorial to understand it. How can I send structured data over a socket? Or can anyone suggest a good tutorial for implementing Telepathy properly? I want to communicate over the network peer-to-peer rather than through a dedicated server. Thanks
Try Google Protocol Buffers or Apache Thrift. There are many examples of how to use them.
As for your comment about "peer to peer", please realize that even in peer-to-peer one of the peers is always acting as a server (sometimes both are).
TCP is a transport layer protocol, as opposed to application layer. This means that TCP is not responsible for the types of data you send, only the raw bits. HTTP has headers and other metadata because it is application level.
For a project like the one you're talking about, you will want to implement your own application layer protocol, but this is not exactly a trivial task. I would look at the python source code in the httplib module for an example of how to implement such a protocol, but note that this is likely fairly different still from what you want, as you will want persistent socket connections to be a first-class citizen in a peer-to-peer chat protocol like the one you're describing.
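As a miniature of what "implementing your own application layer protocol" can mean, here is a length-prefixed framing carrying the headers from the question. All field and function names are illustrative, and JSON stands in for whatever serialization you end up choosing (protobuf, Thrift, etc.):

```python
import json
import struct

def encode_message(name, ip, data, data_type):
    """Frame one message: 4-byte big-endian length prefix + JSON body."""
    body = json.dumps({
        "name": name, "ip": ip, "data": data, "data_type": data_type,
    }).encode("utf-8")
    return struct.pack("!I", len(body)) + body

def decode_message(buffer):
    """Return (message, remaining_bytes), or (None, buffer) if incomplete.

    Because TCP is a byte stream, the receiver accumulates bytes and calls
    this repeatedly until a whole frame is available.
    """
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack("!I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer
    body = buffer[4:4 + length]
    return json.loads(body.decode("utf-8")), buffer[4 + length:]

wire = encode_message("alice", "10.0.0.5", "hello", "text")
message, rest = decode_message(wire)
print(message["name"], message["data_type"], rest)
```

The length prefix is what turns the raw byte stream into discrete, structured messages; everything inside the body is then ordinary serialized data.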
Another option is to use one of the various RPC libraries, eg xmlrpclib, which will handle a decent amount of the required low-level network things for you (although not file transfer; there are other libraries like the ftplib that can do this).
Pickle your data before you send it, and unpickle it on the other end? (Be aware, though, that unpickling data from an untrusted peer can execute arbitrary code, so this is only safe between trusted endpoints.)
http://docs.python.org/library/pickle.html
I am looking for an abstract and clean way to exchange strings between two Python programs. The protocol is really simple: the client/server sends a string to the server/client, which takes the corresponding action (via a handler, I suppose) and replies, or not, to the other side with another string. Strings can be three things: an acknowledgement, signalling one side that the other one is still alive; a pickled class containing a command, if going from the "client" to the "server", or a response, if going from the "server" to the "client"; and finally a "lock" command, which signals one side of the conversation that the other is working and no further questions should be asked until another lock packet is received.
I have been looking at Python's built-in SocketServer.TCPServer, but it's way too low-level; it does not easily support reconnection, and the client has to use the socket interface, which I would prefer to be encapsulated.
I then explored the Twisted framework, particularly the LineOnlyReceiver protocol and the server examples, but I found the initial learning curve too steep, the online documentation assuming a little too much knowledge, and a general lack of examples and good documentation (except the 2005 O'Reilly book; is this still valid?).
I then tried the pyliblo library, which is perfect for the task; alas, it is unidirectional: there is no way to "answer" a client, and I need the answer to be associated with the specific command.
So my question is: is there an existing framework/library/module that gives me a client object in the server, to read commands from and send replies to, and a server object in the client, to read replies from and send commands to, that I can use after a simple setup (client: the server address is host:port; server: you are listening on port X), with the underlying socket, reconnection engine and so on handled for me?
Thanks in advance for any answer (pardon my English and inexperience; this is my first question).
Python also provides an asynchat module that simplifies much of the server/client behaviour common to chat-like communications.
What you want to do seems a lot like RPC, so the things that come to my mind are XML-RPC, or JSON-RPC if you don't want to use XML.
Python has an XML-RPC library that you can use; it uses HTTP as the transport, so it also solves your problem of the socket interface being too low-level. However, if you could provide more detail about what exactly you want to do, perhaps we can give a better solution.
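A minimal sketch of that suggestion, using Python 3's xmlrpc modules (the Python 2 names were SimpleXMLRPCServer and xmlrpclib). The command name and reply format are made up; the point is that the proxy object hides the socket entirely:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server: register a handler for each command string the client may send.
# Binding port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda command: "ack: " + command, "send_command")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: calling a method on the proxy performs the network round trip.
client = ServerProxy("http://127.0.0.1:%d" % port)
reply = client.send_command("status")
print(reply)

server.shutdown()
```

Note that plain XML-RPC is request/response only; for the unsolicited "lock" messages in the question you would still need a persistent connection (e.g. Twisted) or client-side polling.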