I am tasked with writing unit tests for a suite of networked software written in Python. Writing unit tests for message builders and other static methods is very simple, but I've hit a wall when it comes to writing tests for threads that run network loops.
For example: the server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not in parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must be a simpler answer that evades me.
I have to imagine there are methods for unit testing networked threads, but I can't seem to find any.
I would try to introduce a factory into your existing code that purports to create socket objects. Then in a test pass in a mock factory which creates mock sockets which just pretend they've connected to a server (or not for error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers.
Try not to use threads just yet, to simplify testing.
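A minimal sketch of that idea (all names here are hypothetical; the point is that the code under test asks a factory for sockets instead of calling socket.socket itself):

import socket

class RealSocketFactory(object):
    """Production factory: hands out real TCP sockets."""
    def create(self):
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

class MockSocket(object):
    """Test double: records connects instead of touching the network."""
    def __init__(self, log):
        self.log = log
    def connect(self, address):
        self.log.append(('connect', address))

class MockSocketFactory(object):
    def __init__(self):
        self.log = []
    def create(self):
        return MockSocket(self.log)

A test then passes a MockSocketFactory to the code under test and asserts on factory.log to prove the right ports were used in the right order.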
It depends on how your network software is layered and how detailed you want your tests to be, but it's certainly feasible in some scenarios to make server setup and tear-down part of the test. For example, when I was working on the Python logging package (before it became part of Python), I had a test (I didn't use pyunit/unittest - it was just an ad-hoc script) which fired up (in one test) four servers to listen on TCP, UDP, HTTP and HTTP/SOAP ports, and then sent network traffic to them. If you're interested, the distribution is here and the relevant test script in the archive to look at is log_test.py. The Python logging package has of course come some way since then, but the old package is still around for use with versions of Python < 2.3 and >= 1.5.2.
I have some test cases that run a server in setUp and close it in tearDown. I don't know if it's the most elegant way to do it, but it works for me and it helps me a lot.
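A minimal sketch of that pattern with plain unittest (binding to port 0 lets the OS pick a free port, so tests don't clash with real servers):

import socket
import unittest

class ServerInTestCase(unittest.TestCase):
    def setUp(self):
        # Construct the server as part of the test fixture.
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind(('127.0.0.1', 0))
        self.server.listen(1)
        self.port = self.server.getsockname()[1]

    def tearDown(self):
        # Destroy it again so each test starts clean.
        self.server.close()

    def test_client_can_connect(self):
        # Stands in for the real client code under test.
        client = socket.create_connection(('127.0.0.1', self.port))
        client.close()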
If the server initialisation is very slow, an alternative would be to automate it with Ant: Ant would start/stop the server before/after executing the tests.
See here for a very interesting tutorial about Ant and Python.
You would need to create mock sockets. The exact way to do that depends on how you create sockets, and introducing a socket generator would be a good idea. You can also use a mocking library like pymox to make your life easier; it can even eliminate the need to create a socket generator just for the sake of testing.
Using pymox, you would do something like this:
import socket
import mox

def test_connect(self):
    m = mox.Mox()
    m.StubOutWithMock(socket, 'socket')
    socket_mock = m.CreateMockAnything()
    # Recorded expectations: the code under test creates one TCP socket...
    socket.socket(socket.AF_INET, socket.SOCK_STREAM).AndReturn(socket_mock)
    # ...and connects it to these servers, in this order.
    socket_mock.connect(('test_server1', 80))
    socket_mock.connect(('test_server2', 81))
    socket_mock.connect(('test_server3', 82))
    m.ReplayAll()
    code_to_be_tested()
    m.VerifyAll()
    m.UnsetStubs()
The use case is as follows: I have an application, a TCP server, on which clients can connect, send, and receive information. Clients can send little scripts to be run by the server (only a small group of trusted users have the right to do that). Notwithstanding the danger of such a situation, I'd like to know how to debug these scripts, and to offer those users the power to debug. In short, pdb seems like a good match to me.
But still, I'm facing several problems:
pdb must use the socket connected to the client instead of standard input and output. In theory that seems doable, by creating a new Pdb object.
pdb must not freeze the entire program; it should offer to examine a specific script (probably a string of lines) and run it asynchronously, so other users aren't frozen.
I've tried to look into the code of the pdb module, but I admit I don't really know whether I can do both things at the same time.
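For what it's worth, here is roughly what I have in mind for the first point; an untested sketch, where conn is assumed to be the already-accepted client socket:

import pdb

def debug_over_socket(conn, script_source):
    # Wrap the client socket in a file-like object and hand it to Pdb
    # in place of stdin/stdout.
    handle = conn.makefile('rw')
    debugger = pdb.Pdb(stdin=handle, stdout=handle)
    debugger.run(script_source)  # run the user's script under the debugger
    handle.close()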
Thanks for your help,
I have very basic client and server protocols developed using Twisted. Twisted allows you to unit-test them independently and provides nice testing utilities such as StringTransport for this.
However, let's say I want to test that the protocols work together correctly. For instance, I want to test that when the server receives a certain message, it replies to the client in some specific way. What is the best way to do that using trial and the utilities in Twisted? I am currently launching processes to run them, but then I lose access to their objects and have to dump their state to a file to validate the correct behaviour. I don't think this is a clean way to do it. It would be much better to use a StringTransport that simulates a TCP connection from the client to the server. How do Twisted developers normally test this?
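A common pattern is to connect the protocol to a StringTransport, deliver bytes to dataReceived by hand, and assert on what was written to the transport. A sketch, where MyServerProtocol and the wire messages are placeholders for your own:

from twisted.trial import unittest
from twisted.test import proto_helpers

class ServerReplyTests(unittest.TestCase):
    def test_greeting_reply(self):
        proto = MyServerProtocol()  # your real protocol class goes here
        transport = proto_helpers.StringTransport()
        proto.makeConnection(transport)
        # Pretend the client sent a message over the wire...
        proto.dataReceived(b'HELLO\r\n')
        # ...and check what the server wrote back to its transport.
        self.assertEqual(transport.value(), b'WELCOME\r\n')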
I'm writing a script that needs to listen to a number of different ports for multicasts. Whenever a message comes in, I want to do different actions depending on which port it came in on (e.g., log to different files).
My first thought was to use Twisted (or similar), for example by expanding the multicast example in their docs (https://twistedmatrix.com/documents/14.0.0/core/howto/udp.html), but instantiating the one protocol class multiple times, e.g.:
reactor.listenMulticast(8005, MulticastPingClient(), listenMultiple=True)
reactor.listenMulticast(8006, MulticastPingClient(), listenMultiple=True)
reactor.run()
And then use datagramReceived to do different actions based on the port. This doesn't work, and I strongly suspect it's not the best approach.
I know this is a broad question, but hopefully what I aim to achieve is clear. I'm not tied to any framework (just Python). Any pointers for an elegant solution would be greatly appreciated. Factories seemed like they would be a reasonable approach, but they (logically) aren't supported for stateless protocols.
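One way to keep the single-class approach is to tell each protocol instance which port it serves, so datagramReceived can branch on it. A minimal sketch (the group address and the print stand in for your real per-port actions, such as logging to different files):

from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

class PortTaggedMulticastClient(DatagramProtocol):
    def __init__(self, port, group='228.0.0.5'):
        self.port = port
        self.group = group

    def startProtocol(self):
        self.transport.joinGroup(self.group)

    def datagramReceived(self, datagram, address):
        # Branch on self.port here, e.g. pick a log file per port.
        print('port %d got %r from %s' % (self.port, datagram, address))

for port in (8005, 8006):
    reactor.listenMulticast(port, PortTaggedMulticastClient(port),
                            listenMultiple=True)
reactor.run()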
I have a program which will be running on multiple devices on a network. These programs will need to send data between each other - to specified devices (not all devices).
server = server.Server('192.168.1.10')
server.identify('device1')
server.send('device2', 'this will be pickled and sent to device2')
That's some basic example code for what I need to do. Of course, it will also need to receive.
I was looking at building my own simple message-passing server using Twisted when someone pointed me in the direction of MPI. I've never looked into the MPI protocol before, and that website gives rather vague examples.
Is MPI a good approach? Are there better alternatives?
MPI is really good at doing the communications for running a tightly-coupled program across several or many machines in a cluster. If you're running very loosely coupled programs - only interacting occasionally - or the machines are more distributed than within a cluster, like scattered around a LAN, then MPI is probably not what you're looking for.
There are several Open Source message brokers that already handle this kind of stuff for you, and come with a full API ready to use.
You should take a look at:
ActiveMQ which has a Python Stomp client.
RabbitMQ has a Python client too - see Building RabbitMQ apps using Python.
You could build it yourself, but that would be reinventing the wheel (as a side note: I only realised I was halfway to building a message broker myself before I started looking at existing solutions - building one takes a lot of work).
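To give a feel for the broker approach, here is a minimal sending sketch using RabbitMQ's pika client, where each device consumes from a queue named after itself (the host, queue name, and payload are assumptions):

import pika

# Publish a message addressed to device2's queue via the default exchange.
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='192.168.1.10'))
channel = connection.channel()
channel.queue_declare(queue='device2')
channel.basic_publish(exchange='',            # default exchange routes
                      routing_key='device2',  # directly by queue name
                      body='this will be sent to device2')
connection.close()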
Consider using something like ZeroMQ. It supports the most useful messaging idioms - push/pull, publish/subscribe and so on, and although it's not 100% clear from your question which one you need, I'm pretty sure you will find the answer there.
They have a great user guide here, and the Python bindings are well-developed and supported. Some code samples are here.
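As a rough illustration of how your example maps onto ZeroMQ (a sketch assuming pyzmq, with a broker on 192.168.1.10; the identities and the forwarding rule are the interesting part):

import zmq

ctx = zmq.Context()

# Broker side: a ROUTER socket can address connected peers by the
# identity each of them sets.
broker = ctx.socket(zmq.ROUTER)
broker.bind('tcp://*:5555')

# Device side: a DEALER with an explicit identity, analogous to
# server.identify('device1') in the question.
device = ctx.socket(zmq.DEALER)
device.setsockopt(zmq.IDENTITY, b'device1')
device.connect('tcp://192.168.1.10:5555')
device.send_multipart([b'device2', b'payload for device2'])

# The broker receives [sender, target, payload] and re-sends it as
# [target, sender, payload] so it reaches device2.
sender, target, payload = broker.recv_multipart()
broker.send_multipart([target, sender, payload])

Running the broker and device in one process like this is only for illustration; in practice each device would be its own program.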
You can use MPI functions to set up communication between separate programs. In this approach the server program publishes "MPI ports" with different IDs. Clients look up these ports and try to connect to them; only the server can accept each connection. Once the communication is established, the programs can exchange data.
Another possibility is to run the different programs together in MPI's multiple-program (MPMD) mode. In this case all programs are launched at the same time, so there is no need to create port communicators. Once they are running, you can create particular communicators between the groups of programs you select.
Please tell me which kind of method you need and I can provide C code to implement the functions.
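In the meantime, here is a rough Python sketch of the port-based flow using mpi4py (dynamic process support in your MPI implementation is an assumption):

from mpi4py import MPI

# Server side: open a port and wait for one client to connect.
port = MPI.Open_port()
print('port name:', port)   # pass this string to the client out of band
inter = MPI.COMM_WORLD.Accept(port)   # returns an intercommunicator
print(inter.recv(source=0))
MPI.Close_port(port)

# Client side (a separate program, given the port name):
# inter = MPI.COMM_WORLD.Connect(port)
# inter.send('hello from the client', dest=0)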
Problem:
I have a python script that I have running as a service. It's a subclass of the win32 class win32serviceutil.ServiceFramework. I want a simple straightforward way of sending arbitrary commands to it via the command line.
What I've looked at:
It looks like the standard way of controlling the service once it's started is to use another program to send it command signals, but I need to be able to send it a short string as well as an argument. Using a named pipe might be a good idea, but it's really too complex for what I want to do. Is there any simpler way?
Not really.
You have many, many ways to do "Interprocess Communication" (IPC) in Python.
Sockets
Named Pipes (see http://developers.sun.com/solaris/articles/named_pipes.html) -- it involves a little bit of OS magic to create, but then it's just a file that you read and write.
Shared Memory (see http://en.wikipedia.org/wiki/Shared_memory) -- this also involves a fair amount of OS-level magic.
Semaphores and locks; files with locks can work well for IPC.
Higher-level protocols built on sockets...
HTTP; this is what WSGI is all about.
FTP
etc.
A common solution is to use HTTP and define "RESTful" commands. Your service listens on port 80 for HTTP requests that contain arguments and parameters. Look at wsgiref for more information on this.
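A minimal sketch of that approach with wsgiref (the command dispatch, host, and port are placeholders; a real service would run the loop in a worker thread):

from wsgiref.simple_server import make_server

def command_app(environ, start_response):
    # Treat the request path as the command and the query string as
    # its argument, e.g. GET /reload?config=prod
    command = environ.get('PATH_INFO', '/').lstrip('/')
    argument = environ.get('QUERY_STRING', '')
    # Replace this with the service's own dispatch logic.
    result = 'got command %r with argument %r' % (command, argument)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [result.encode('utf-8')]

server = make_server('localhost', 8000, command_app)
server.serve_forever()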