Testing a Twisted protocol - Python

I have a very basic pair of client and server protocols developed using Twisted. Twisted allows you to unit-test them independently and provides nice testing utilities, such as StringTransport, for this.
However, let's say I want to test that the protocols work together correctly. For instance, I want to test that when the server receives a certain message, it replies to the client in some specific way. What is the best way to do that using trial and the utilities in Twisted? I am currently launching processes to run them, but then I lose access to their objects and have to dump their state to a file to validate the behaviour. I don't think this is a clean way to do it. It would be much better to use a StringTransport that simulates a TCP connection from the client to the server. How do Twisted developers normally test this?
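For illustration, a minimal sketch of what the StringTransport approach looks like. PingClient and PongServer are hypothetical stand-ins for real protocols, and the pump() helper hand-delivers each side's output to the other, simulating the TCP link entirely in-process:

from twisted.internet.protocol import Protocol
from twisted.test import proto_helpers
from twisted.trial import unittest


class PongServer(Protocol):
    def dataReceived(self, data):
        if data == 'PING\r\n':
            self.transport.write('PONG\r\n')


class PingClient(Protocol):
    got_pong = False

    def dataReceived(self, data):
        self.got_pong = (data == 'PONG\r\n')


class PingPongTests(unittest.TestCase):
    def setUp(self):
        self.client, self.server = PingClient(), PongServer()
        self.client_transport = proto_helpers.StringTransport()
        self.server_transport = proto_helpers.StringTransport()
        self.client.makeConnection(self.client_transport)
        self.server.makeConnection(self.server_transport)

    def pump(self):
        # Deliver whatever each side has written to the other side.
        self.server.dataReceived(self.client_transport.value())
        self.client_transport.clear()
        self.client.dataReceived(self.server_transport.value())
        self.server_transport.clear()

    def test_server_replies(self):
        self.client.transport.write('PING\r\n')
        self.pump()
        # Both protocol objects stay in-process, so we can assert on
        # their state directly instead of dumping it to a file.
        self.assertTrue(self.client.got_pong)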

Related

Twisted Reactor for client-side Python interface / raw_input

I am using Twisted to run a rather complicated server that allows for data collection, communication, and commanding of a hardware device remotely. On the client side there are a number of data retrieval and command operations available. Typically I use the wxPython reactor for the client, but I would also like to set up a simpler command-line style interface.
Is there a reactor that I can use to set up a local, non-blocking, raw_input-style interface for the client? After the client successfully connects, the server will occasionally send data down unrequested, as a result of server-side events.
I have considered manhole, but I am not interested in accessing the server as an interface; I am strictly interested in accessing the client-side data and commands. This is mostly for debugging, but it can also come in handy as a much more rudimentary client interface when needed.
See the stdin.py and stdiodemo.py examples; I think they are similar to what you're aiming for. They demonstrate connecting a protocol (like a LineReceiver) to StandardIO.
I think you could also use a StandardIOEndpoint (and maybe we should update the examples for that), but that doesn't change the way you'd write your protocol.
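A minimal sketch in the spirit of those examples, assuming a hypothetical CommandLine protocol wired to StandardIO:

from twisted.internet import reactor, stdio
from twisted.protocols.basic import LineReceiver


class CommandLine(LineReceiver):
    delimiter = '\n'   # terminal input ends lines with \n, not \r\n

    def connectionMade(self):
        self.transport.write('> ')

    def lineReceived(self, line):
        # Dispatch the typed command to the client-side API here;
        # data pushed down by the server can be printed the same way.
        self.sendLine('command was: %s' % (line,))
        self.transport.write('> ')


if __name__ == '__main__':
    stdio.StandardIO(CommandLine())
    reactor.run()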

Client/Server role reversal with SimpleXMLRPCServer in Python

I'm working on a project to expose a set of methods from various client machines to a server for the purpose of information gathering and automation. I'm using Python at the moment, and SimpleXMLRPCServer seems to work great on a local network, where I know the addresses of the client machines, and there's no NAT or firewall.
The problem is that the client/server model is backwards for what I want to do. Rather than have an RPC server running on the client machine, exposing a service to the software client, I'd like to have a server listening for connections from clients, which connect and expose the service to the server.
I'd thought about tunneling, remote port forwarding with SSH, or a VPN, but those options don't scale well, and introduce more overhead and complexity than I'd like.
I'm thinking I could write a server and client to reverse the model, but I don't want to reinvent the wheel if it already exists. It seems to me that this would be a common enough problem that there would be a solution for it already.
I'm also just cutting my teeth on Python and networked services, so it's possible I'm asking the wrong question entirely.
What you want is probably WAMP routed RPC.
It seems to address your issue and it's very convenient once you get used to it.
The idea is to put the WAMP router (let's say) in the cloud, and both RPC caller and RPC callee are clients with outbound connections to the router.
I was also using a VPN to connect IoT devices together over the internet, but switching to this router model really simplified things, and it scales pretty well.
By the way, WAMP is implemented in several languages, including Python.
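A minimal sketch of the callee side with Autobahn (a Python WAMP implementation); the router URL and the procedure URI below are hypothetical:

from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationRunner, ApplicationSession


class ExposedService(ApplicationSession):
    """Runs on the 'client' machine and registers a callable with the
    router over an outbound connection, so NAT is not an issue."""

    @inlineCallbacks
    def onJoin(self, details):
        def get_status():
            return {'ok': True}
        yield self.register(get_status, u'com.example.get_status')


if __name__ == '__main__':
    runner = ApplicationRunner(u'ws://router.example.com:8080/ws', u'realm1')
    runner.run(ExposedService)

The machine that wants the information opens its own outbound session to the same router and invokes the procedure with self.call(u'com.example.get_status').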
Maybe Pyro can be of use? It allows for many forms of distributed computing in Python. Your requirements are not very clear, so it is hard to say whether it will work for you, but I advise you to have a look at the documentation or the many Pyro examples to see if something matches what you want to do.
Pyro abstracts most of the networking intricacy away; you simply invoke a method on a (remote) Python object.
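A minimal sketch, assuming Pyro4 and hypothetical names; the Collector object lives in one process and is invoked from another as if it were local:

import Pyro4


@Pyro4.expose
class Collector(object):
    def report(self, payload):
        return 'got %d bytes' % len(payload)


daemon = Pyro4.Daemon()              # listens on a free local port
uri = daemon.register(Collector())   # e.g. PYRO:obj_...@localhost:...
print(uri)
daemon.requestLoop()

# On the calling side:
#   proxy = Pyro4.Proxy(uri)
#   print(proxy.report('hello'))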

Handling thousands of persistent TCP connections with Python

I need to develop an application in Python that handles a few thousand persistent TCP connections in parallel. Clients connect to the server at bootstrap and send messages (in binary format) from time to time. The server also sends binary messages, both in reply to clients' messages and asynchronously. Basically, the connection is persistent and initiated by the client because I have no way to reach clients that are behind a NAT.
The question is: which libraries/frameworks should I consider for this task? Spawning a thread per client is not an option, and I'm not aware of a thread pool library for Python. I also recently discovered gevent. Which other options do I have?
This link is an excellent read. It lists the available event-driven and asynchronous network frameworks in Python and includes a good analysis of each framework's performance.
It suggests that the Tornado framework is one of the most performant for developing such applications.
Hope this helps
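For illustration, a minimal persistent-connection echo server with Tornado's TCPServer; the port and the echo reply are placeholders:

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer


class BinaryServer(TCPServer):
    @gen.coroutine
    def handle_stream(self, stream, address):
        # One coroutine per connection; thousands can be in flight.
        while True:
            try:
                data = yield stream.read_bytes(4096, partial=True)
                yield stream.write(data)   # replace with real replies
            except StreamClosedError:
                break


BinaryServer().listen(9000)
IOLoop.current().start()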
greenlet is a lightweight concurrency package. See http://greenlet.readthedocs.org/en/latest/.
Besides greenlets, you might also want to consider multiprocessing. See http://docs.python.org/2/library/multiprocessing.html.
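And a minimal sketch of the greenlet approach via gevent (mentioned in the question): the server spawns one lightweight greenlet per connection, and the echo reply is a placeholder:

from gevent.server import StreamServer


def handle(sock, address):
    # Runs in its own greenlet; a blocking recv() only blocks this client.
    while True:
        data = sock.recv(4096)
        if not data:
            break              # client disconnected
        sock.sendall(data)     # replace with real binary replies


server = StreamServer(('0.0.0.0', 9000), handle)
server.serve_forever()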

How to write a server using an existing version and Wireshark?

I decided to improve my knowledge of Python network programming, and here is the deal: I have a simple server for Windows which interacts with a client on a mobile device over Wi-Fi. I also have a packet sniffer (Wireshark).
Now I want to ask: what do I need in order to write the Linux version of this server? How do I determine the structure of the packets and establish the connection? What should I use: sockets, Twisted, maybe Tornado?
Start with the SocketServer module and build from there.
Note that this will take a lot of guesswork if there is no documentation for the protocol. If you're lucky, they are using XML or HTML. If not, you will have to make the existing server send a lot of test data and manipulate it in some way (change fields and see what changes in the data stream).
Good luck!
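A minimal sketch of such a starting point; the port and the echo reply are placeholders to replace once Wireshark reveals the packet layout:

import SocketServer   # socketserver in Python 3


class Handler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        # Parse `data` according to the captured packet structure and
        # craft the reply the mobile client expects; echo it for now.
        self.request.sendall(data)


server = SocketServer.TCPServer(('0.0.0.0', 5000), Handler)
server.serve_forever()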

Using pyunit on a network thread

I am tasked with writing unit tests for a suite of networked software written in Python. Writing tests for message builders and other static methods is very simple, but I've hit a wall when it comes to writing tests for looped network threads.
For example: the server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not in parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must be a simpler answer that evades me.
I have to imagine there are methods for unit testing networked threads, but I can't seem to find any.
I would try to introduce a factory into your existing code that is responsible for creating socket objects. Then, in a test, pass in a mock factory which creates mock sockets that just pretend they've connected to a server (or fail to, for the error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers.
Try not to use threads just yet, to simplify testing.
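A minimal sketch of the factory idea; all names here are hypothetical:

import socket


def default_factory():
    return socket.socket(socket.AF_INET, socket.SOCK_STREAM)


class Client(object):
    def __init__(self, socket_factory=default_factory):
        self._socket_factory = socket_factory

    def connect(self, host, port):
        self.sock = self._socket_factory()
        self.sock.connect((host, port))
        self.sock.sendall('HELLO\r\n')


class FakeSocket(object):
    """Pretends to connect and records the traffic for assertions."""

    def __init__(self):
        self.connected_to = None
        self.sent = []

    def connect(self, addr):
        self.connected_to = addr

    def sendall(self, data):
        self.sent.append(data)


# In a test: client = Client(socket_factory=FakeSocket)
# client.connect('server', 1234)
# assert client.sock.connected_to == ('server', 1234)
# assert client.sock.sent == ['HELLO\r\n']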
It depends on how your network software is layered and how detailed you want your tests to be, but it's certainly feasible in some scenarios to make server setup and teardown part of the test. For example, when I was working on the Python logging package (before it became part of Python), I had a test (not pyunit/unittest, just an ad-hoc script) which fired up, in one test, four servers listening on TCP, UDP, HTTP, and HTTP/SOAP ports, and then sent network traffic to them. If you're interested, the distribution is here and the relevant test script in the archive is log_test.py. The Python logging package has of course come some way since then, but the old package is still around for use with versions of Python < 2.3 and >= 1.5.2.
I have some test cases that run a server in setUp and close it in tearDown. I don't know whether it is a very elegant way to do it, but it works for me.
I am happy with it and it helps me a lot.
If the server initialisation takes very long, an alternative would be to automate it with Ant: Ant would start/stop the server before/after executing the tests.
See here for a very interesting tutorial about Ant and Python.
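A minimal sketch of the setUp/tearDown approach described above, using a throwaway stdlib server on an OS-assigned port:

import socket
import threading
import unittest
import SocketServer   # socketserver in Python 3


class EchoHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.request.sendall(self.request.recv(1024))


class ServerBackedTest(unittest.TestCase):
    def setUp(self):
        # Port 0 makes the OS pick a free port, so test runs never collide.
        self.server = SocketServer.TCPServer(('127.0.0.1', 0), EchoHandler)
        self.port = self.server.server_address[1]
        self.thread = threading.Thread(target=self.server.serve_forever)
        self.thread.daemon = True
        self.thread.start()

    def tearDown(self):
        self.server.shutdown()
        self.server.server_close()
        self.thread.join()

    def test_round_trip(self):
        s = socket.create_connection(('127.0.0.1', self.port))
        s.sendall('hi')
        self.assertEqual(s.recv(1024), 'hi')
        s.close()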
You would need to create mock sockets. Exactly how depends on how you create sockets; introducing a socket factory would be a good idea. You can also use a mocking library like pymox to make your life easier, and it may eliminate the need to create a socket factory for the sole purpose of testing.
Using pymox, you would do something like this:
import socket
import unittest

import mox


class ConnectTest(unittest.TestCase):
    def test_connect(self):
        m = mox.Mox()
        # Swap out socket.socket so no real connection is attempted.
        m.StubOutWithMock(socket, 'socket')
        socket_mock = m.CreateMockAnything()
        socket.socket(socket.AF_INET, socket.SOCK_STREAM).AndReturn(socket_mock)
        # Record the connect() calls we expect, in order.
        socket_mock.connect(('test_server1', 80))
        socket_mock.connect(('test_server2', 81))
        socket_mock.connect(('test_server3', 82))
        m.ReplayAll()
        code_to_be_tested()   # your code that creates the socket and connects
        m.VerifyAll()
        m.UnsetStubs()
