I'm seeing an issue in my trial test suite: everything works fine until a test times out. If a test fails due to a timeout, the tearDown function never gets called, leaving the reactor unclean, which in turn causes the rest of the tests to fail. I believe tearDown should be called even after a timeout; does anyone know why this might happen?
You are correct that tearDown() should be called regardless of what happens in your test. From the documentation for tearDown():
This is called even if the test method raised an exception
However, there is a catch. From the same documentation:
This method will only be called if the setUp() succeeds, regardless of the outcome of the test method.
So it sounds like perhaps you start the reactor in setUp(), and when the test times out, trial treats setUp() as having failed, which prevents your tearDown() from running. The idea is that whatever you were trying to set up in setUp() was never successfully set up, so there is nothing to tear down. It would be hard to diagnose with certainty, though, without seeing the code of your setUp and tearDown methods along with any relevant tests.
It's rather strange, because on my box the tearDown executes even if a timeout occurs. The tests should stop running if the reactor is not in a clean state, unless you use the --unclean-warnings flag. Does the test runner stop after the timeout for you? What version of Python and Twisted are you running?
As a side note, if you need to run a unique teardown for a specific test function, there's a very convenient addCleanup() hook. It comes in handy for cancelling callbacks, LoopingCalls, or callLater timers so that the reactor isn't left in a dirty state. Cleanup functions registered with addCleanup() may return Deferreds, which trial will wait on, so you can chain callbacks that perform an ad hoc teardown. It might be a good option to try if the class-level tearDown isn't working for you.
PS
I've been writing "well behaved" Twisted code for so long that I don't even remember how to get into an unclean reactor state :D I swear I'm not bragging. Could you give me a brief summary of what you're doing so that I can test it out on my end?
I found the problem, I'll put this here in case it's helpful to anyone else in the future.
I was returning a Deferred from the test that had already been fired (as in, deferred.callback had been called) but that still had an unfinished callback chain. From what I can see in the trial code here https://github.com/twisted/twisted/blob/twisted-16.5.0/src/twisted/trial/_asynctest.py#L92, the reactor is crashed when this happens, which explains why tearDown doesn't get called. The solution for me was to return a Deferred from the offending tests whose callback chain does not live for a long time (its callbacks do not themselves return Deferreds).
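The failure mode can be sketched without Twisted. This toy model (NOT Twisted's real implementation) shows how a Deferred that has been fired, but whose chain is paused on an inner, unfired Deferred, ends up "called" yet unfinished:

```python
class ToyDeferred:
    """Toy model of a Deferred's fired/paused state, for illustration only."""
    def __init__(self):
        self.called = False          # has callback() been invoked?
        self.paused = False          # is the chain waiting on an inner deferred?
        self._callbacks = []
        self.result = None

    def addCallback(self, fn):
        self._callbacks.append(fn)
        if self.called and not self.paused:
            self._run()
        return self

    def callback(self, result):
        self.called = True
        self.result = result
        self._run()

    def _run(self):
        while self._callbacks and not self.paused:
            self.result = self._callbacks.pop(0)(self.result)
            if isinstance(self.result, ToyDeferred) and not self.result.called:
                self.paused = True   # chain stalls until the inner one fires

inner = ToyDeferred()
outer = ToyDeferred()
outer.addCallback(lambda _: inner)   # a callback returning an unfired deferred
outer.callback(None)                 # outer is now "called"...
print(outer.called, outer.paused)    # True True -- called, yet unfinished
```

Returning something in the state of `outer` here is what trial rejects.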
Related
I'm using Tornado as a coroutine engine for a periodic process, where the repeating coroutine calls ioloop.call_later() on itself at the end of each execution. I'm now trying to drive this with unit tests (using Tornado's gen_test), where I'm mocking the ioloop's time with a local variable t:
DUT.ioloop.time = mock.Mock(side_effect=lambda: t)
(DUT <==> Device Under Test)
Then in the test, I manually increment t, and yield gen.moment to kick the ioloop. The idea is to trigger the repeating coroutine after various intervals so I can verify its behaviour.
But the coroutine doesn't always trigger - or perhaps it yields back to the testing code before completing execution, causing failures.
I think I should be using stop() and wait() to synchronise the test code, but I can't see concretely how to use them in this situation. And how does this whole testing strategy work if the DUT runs in its own ioloop?
In general, using yield gen.moment to trigger specific events is dicey; there are no guarantees about how many "moments" you must wait, or in what order the triggered events occur. It's better to make sure that the function being tested has some effect that can be asynchronously waited for (if it doesn't have such an effect naturally, you can use a tornado.locks.Condition).
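The wait-for-an-effect pattern above can be sketched with stdlib asyncio, whose primitives tornado.locks mirrors (the names periodic/run_test are illustrative): the code under test signals an awaitable event, and the test awaits that instead of yielding gen.moment and hoping.

```python
import asyncio

async def periodic(done: asyncio.Event, results: list):
    results.append("tick")               # the side effect under test
    done.set()                           # observable completion signal

async def run_test():
    done = asyncio.Event()
    results = []
    asyncio.ensure_future(periodic(done, results))
    await asyncio.wait_for(done.wait(), timeout=1.0)  # no guessing at "moments"
    return results

out = asyncio.run(run_test())
print(out)  # ['tick']
```

The timeout on wait_for keeps a broken DUT from hanging the test suite.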
There are also subtleties to patching IOLoop.time. I think it will work with the default Tornado IOLoops (where it is possible without the use of mock: pass a time_func argument when constructing the loop), but it won't have the desired effect with e.g. AsyncIOLoop.
I don't think you want to use AsyncTestCase.stop and .wait, but it's not clear how your test is set up.
I've added a module to mopidy core that uses gobject.timeout_add() for a repeating function. It works fine when running normally, however when running tests it appears that the handler function never gets called.
The module under test has a method that starts a regular process that emits events roughly every half second. The test calls the method, then calls time.sleep(2). The events from the timer function never occur, and neither does the debug logging inside the timer function. Other events and debug logging (outside the timer function) work fine.
What do I need to do to get gobject.timeout_add() to work in nose tests? Or do I need to use something other than time.sleep() in the test in order to allow the other code to run? It's calling gobject.threads_init() in the test setup, is there anything else I need to do?
You need to be running an event loop. As the documentation for g_timeout_add explains, that function (and other similar functions in glib) creates a new timeout source which is then attached to the event loop (well, the GMainContext, but you don't really need to worry about that). It is not the equivalent of spawning a new thread and having it sleep for whatever you specified as the timeout duration, which seems to be the behavior you're expecting; using the main loop allows everything to happen in a single thread.
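The same dispatch model exists in stdlib asyncio, so the behaviour can be sketched without pygobject (this is an analogue for illustration, not GLib itself): a timeout callback only fires while the event loop is actually running, and time.sleep merely blocks the thread.

```python
import asyncio
import time

fired = []
loop = asyncio.new_event_loop()
loop.call_later(0.05, lambda: fired.append("timeout"))

time.sleep(0.2)              # sleeping blocks the thread; the loop never runs
before = list(fired)         # still [] -- the timeout was never dispatched

loop.run_until_complete(asyncio.sleep(0.1))  # now the loop runs the callback
loop.close()
print(before, fired)         # [] ['timeout']
```

In the nose test, the equivalent fix is to iterate a GLib main loop (or run one in a helper) instead of calling time.sleep().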
We have a complex multiservice that needs to do some fairly intricate accounting when shutting down in order to implement a "graceful" shutdown.
I'm trying to write tests for this under trial. The issue is that the reactor is effectively a process-global resource, and shutting down my service means that trial's reactor is also stopped, which (of course) makes it explode.
This is documented to be a no-no in trial, but I need some kind of workaround that allows me to write my tests. My first thought was to use a mock.Mock, but this means we're not really using a reactor that's shutting down, which isn't going to give me behavior that's faithful to the actual shutdown process.
I believe what I need is a way to separate trial's reactor from the reactor of my service-under-test. Sharing a mutable resource between the test system and the system under test is surely an anti-pattern.
There's a difference between shutting down a service and stopping a reactor. You should be able to test most of the desired behavior with myservice.stopService. To test the code that actually initiates the shutdown, just mock out reactor.stop with self.patch(reactor, 'stop', mock.Mock()), and later assert that it was called. If you want to link the two, then have your mock stop call your service's stopService.
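A stdlib sketch of that suggestion (FakeReactor and MyService are hypothetical stand-ins; in trial you would patch the real reactor with self.patch(reactor, 'stop', mock.Mock(...))): patching stop lets the shutdown path run to completion without touching trial's reactor, and linking the mock to stopService exercises both halves.

```python
from unittest import mock

class FakeReactor:
    def stop(self):
        raise RuntimeError("stopping the real reactor would kill trial")

class MyService:
    def __init__(self, reactor):
        self.reactor = reactor
        self.running = True

    def stopService(self):               # graceful-shutdown accounting here
        self.running = False

    def shutdown(self):                  # code under test: initiates shutdown
        self.reactor.stop()

reactor = FakeReactor()
service = MyService(reactor)
with mock.patch.object(reactor, "stop", side_effect=service.stopService) as stop:
    service.shutdown()
print(stop.called, service.running)      # True False
```

The assertion that stop was called verifies the initiation logic; the service state verifies the teardown itself.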
Let's say I have this blob of code that's made to be one long-running thread of execution, to poll for events and fire off other events (in my case, using XMLRPC calls). It needs to be refactored into clean objects so it can be unit tested, but in the meantime I want to capture some of its current behavior in some integration tests, treating it like a black box. For example:
# long-lived code
import xmlrpclib

s = xmlrpclib.ServerProxy('http://XXX:yyyy')

def do_stuff():
    while True:
        ...
        if s.xyz():
            s.do_thing(...)

# test code
import threading, time
# stub out xmlrpclib

def run_do_stuff():
    other_code.do_stuff()

def setUp():
    global t
    t = threading.Thread(target=run_do_stuff)
    t.setDaemon(True)

def tearDown():
    # somehow kill t
    t.join()

def test1():
    t.start()
    time.sleep(5)
    assert some_XMLRPC_side_effects
The last big issue is that the code under test is designed to run forever, until a Ctrl-C, and I don't see any way to force it to raise an exception or otherwise kill the thread so I can start it up from scratch without changing the code I'm testing. I lose the ability to poll any flags from my thread as soon as I call the function under test.
I know this is really not how tests are designed to work, integration tests are of limited value, etc, etc, but I was hoping to show off the value of testing and good design to a friend by gently working up to it rather than totally redesigning his software in one go.
The last big issue is that the code under test is designed to run forever, until a Ctrl-C, and I don't see any way to force it to raise an exception or otherwise kill the thread
The point of Test-Driven Development is to rethink your design so that it is testable.
Looping forever -- while seemingly fine for production use -- is untestable.
So make the loop terminable. It won't hurt production, and it will improve testability.
A design that "runs forever" is not designed for testability, so fix the design to be testable.
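The refactoring amounts to injecting the loop condition. This is a sketch with illustrative names (server, xyz, do_thing stand in for the question's code): production passes no condition and loops forever as before, while the test bounds the iteration count.

```python
def do_stuff(server, keep_running=lambda: True):
    handled = 0
    while keep_running():
        if server.xyz():
            server.do_thing()
            handled += 1
    return handled

class StubServer:
    def xyz(self):
        return True
    def do_thing(self):
        pass

# Test: allow exactly three iterations, then the loop terminates cleanly.
budget = [3]
def three_iterations():
    budget[0] -= 1
    return budget[0] >= 0

n = do_stuff(StubServer(), three_iterations)
print(n)  # 3
```

The default argument preserves the original behaviour, so no call site in production has to change.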
I think I found a solution that does what I was looking for: Instead of using a thread, use a separate process.
I can write a small python stub to do mocking and run the code in a controlled way. Then I can write the actual tests to run my stub in a subprocess for each test and kill it when each test is finished. The test process could interact with the stub over stdio or a socket.
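A sketch of that subprocess approach (the stub below is hypothetical; a real one would patch xmlrpclib and call other_code.do_stuff()): the test drives the stub over stdio and kills it at the end, so the forever-loop never outlives a test.

```python
import subprocess
import sys

STUB = r"""
import time
print("stub running", flush=True)   # a real stub would report side effects here
time.sleep(60)                      # stands in for the forever-loop
"""

proc = subprocess.Popen([sys.executable, "-c", STUB],
                        stdout=subprocess.PIPE, text=True)
first_line = proc.stdout.readline().strip()   # interact over stdio
proc.kill()                                   # always killable, unlike a thread
proc.wait()
print(first_line)  # stub running
```

Unlike a daemon thread, a child process can always be terminated, which makes the tearDown reliable.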
This is what I'm trying to accomplish. I'm making a remote call to a server for information, and I want to block to wait for the info. I created a function that returns a Deferred such that when the RPC comes in with the reply, the deferred is called. Then I have a function called from a thread that goes threads.blockingCallFromThread(reactor, deferredfunc, args).
If something goes wrong - for example, the server goes down - then the call will never un-block. I'd prefer the deferred to go off with an exception in these cases.
I partially succeeded. I have a deferred, onConnectionLost which goes off when the connection is lost. I modified my blocking call function to:
deferred = deferredfunc(args)
self.onConnectionLost.addCallback(lambda _: deferred.errback(
failure.Failure(Exception("connection lost while getting run"))))
result = threads.blockingCallFromThread(
reactor, lambda _: deferred, None)
return result
This works fine. If the server goes down, the connection is lost and the errback is triggered. However, if the server does not go down and everything shuts down cleanly, onConnectionLost still fires, and the anonymous callback attempts to trigger the errback, raising an AlreadyCalledError.
Is there any neat way to check that a deferred has already been fired? I want to avoid wrapping it in a try/except block, but I can always resort to that if that's the only way.
There are ways, but you really shouldn't do it. Your code that is firing the Deferred should be keeping track of whether it's fired the Deferred or not in the associated state. Really, when you fire the Deferred, you should lose track of it so that it can get properly garbage collected; that way you never need to worry about calling it twice, since you won't have a reference to it any more.
Also, it looks like you're calling deferredfunc in the same thread from which you're calling blockingCallFromThread. Don't do that; functions that return Deferreds are most likely calling reactor APIs, and those APIs are not thread-safe. In fact, Deferred itself is not thread-safe. This is why it's blockingCallFromThread, not blockOnThisDeferredFromThread. You should do blockingCallFromThread(reactor, deferredfunc, args).
If you really want errback-if-it's-been-called-otherwise-do-nothing behavior, you may want to cancel the Deferred.
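Cancellation gives exactly the errback-if-unfired-otherwise-do-nothing behaviour. Here is a stdlib asyncio analogue as a sketch (asyncio futures behave comparably to cancellable Deferreds on this point): cancelling an unfinished future marks it cancelled, while cancelling one that already has a result does nothing.

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()

    pending = loop.create_future()
    pending.cancel()                 # not yet fired -> becomes cancelled

    done = loop.create_future()
    done.set_result(42)
    done.cancel()                    # already fired -> silently does nothing

    return pending.cancelled(), done.result()

out = asyncio.run(main())
print(out)  # (True, 42)
```

In the onConnectionLost callback, cancelling the Deferred instead of calling errback directly would avoid the AlreadyCalled problem.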