Pytest: mock/patch sys.stdin in a program using threading with Python

I've acquired some code that I need to test before refactoring. It uses deep recursion, so it raises the limits and then runs itself in a fresh thread:
sys.setrecursionlimit(10**6)  # allow very deep recursion
threading.stack_size(2**27)   # applies to threads started after this call
...
threading.Thread(target=main).start()  # main() runs on a thread with the larger stack
The code relies heavily on sys.stdin and sys.stdout, e.g.:
class SpamClass:
    def read(self):
        self.n = int(sys.stdin.readline())
        ...
        for i in range(self.n):
            [a, b, c] = map(int, sys.stdin.readline().split())
            ...

    def write(self):
        print(" ".join(str(x) for x in spam()))
To test the code, I need to pass in the contents of a series of input files and compare the results to the contents of some corresponding sample output files.
So far, I've tried three or four different types of mocking and patching without success. My other tests are all written for pytest, so it would be a real nuisance to have to use something else.
I've tried patching module.sys.stdin with StringIO, which doesn't seem to work because pytest's capsys sets sys.stdin to null and hence throws an error despite the patch.
I've also tried using pytest's monkeypatch fixture to replace the module.SpamClass.read method with a function defined in the test, but that produces a segmentation error due, I think, to the thread exiting before the test (or …?).
'pytest test_spam.py' terminated by signal SIGBUS (Misaligned address error)
Any suggestions for how to do this right? Many thanks.

Well, I still don't know what the problem was or whether I'm doing this right, but it works for now. I'm not confident the threading aspect is working correctly, but the rest seems fine.
@pytest.mark.parametrize("inputs, outputs", helpers.get_sample_tests('spampath'))
def test_tree_orders(capsys, inputs, outputs):
    """
    """
    with patch('module.sys.stdin', StringIO("".join(inputs))):
        thread = module.threading.Thread(target=module.main)  # pass the function itself, don't call it
        thread.start()
        thread.join()  # wait for the thread to finish before reading captured output
    captured = capsys.readouterr()
    assert "".join(outputs) == captured.out
For anyone else who's interested, it helps to write your debugging prints as print(spam, file=sys.stderr); you can then read them in the test as captured.err, alongside the captured.out used for the assertions.
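A minimal sketch of that stderr trick (the function, strings, and test here are hypothetical, not taken from the code above):
import sys

def main():
    print("debug: read 3 values", file=sys.stderr)  # diagnostics go to stderr
    print("1 2 3")  # the real output under test

def test_main(capsys):
    main()
    captured = capsys.readouterr()
    assert captured.out == "1 2 3\n"  # stdout stays clean for the assertion
    assert "debug" in captured.err    # diagnostics are available separately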

Related

How to receive standard out in python program

Long story short, I am writing Python code that occasionally causes an underlying module to print complaints to the terminal that I want my code to respond to. My question is whether there is some way to capture all terminal output as a string while the program is running, so that I can parse it and execute some handler code. It's not errors that crash the program entirely, and not a situation where I can simply do a try/except. Thanks for any help!
Edit: Running on Linux
There are several solutions to your need. The easiest is to use a shared buffer of sorts and send all your package's output there instead of to stdout (with regular print), thus keeping your personal streams under your package's control.
Since you probably already have code that uses print, and you want it to keep working with minimal change, I suggest using contextlib.redirect_stdout as a context manager.
Give it a shared io.StringIO instance and wrap all your methods with it.
You can even create a decorator to do it automatically.
Something like:
# decorator
from contextlib import redirect_stdout
import io
import functools

SHARED_BUFFER = io.StringIO()

def std_redirecter(func):
    @functools.wraps(func)
    def inner(*args, **kwargs):
        with redirect_stdout(SHARED_BUFFER):
            print('foo')
            print('bar')
            func(*args, **kwargs)
    return inner

# your files
@std_redirecter
def writing_to_stdout_func():
    print('baz')

# invocation
writing_to_stdout_func()
string = SHARED_BUFFER.getvalue()  # 'foo\nbar\nbaz\n'
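Since the goal is to parse the output while the program is running, a small helper can empty the shared buffer each time you inspect it (drain_buffer is a hypothetical name, not part of the snippet above):
def drain_buffer():
    """Return everything printed so far and reset the shared buffer."""
    contents = SHARED_BUFFER.getvalue()
    SHARED_BUFFER.seek(0)
    SHARED_BUFFER.truncate()
    return contents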

Testing script output

I have a Python script that accepts one input (a text file): ./myprog.py file.txt. The script outputs a string based on the given input.
I have a set of test files that I would like to test my program with. I know the expected output for each file and want to make sure my script produces the correct output for each file.
What is the generally accepted way to do this type of testing?
I was thinking of using Python's unittest module as the testing framework, and then running my script through subprocess.check_output(stderr=subprocess.STDOUT), capturing stdout and stderr, and then doing a unittest assertEqual to compare the actual and expected strings. I want to make sure I'm not missing some nicer solution.
There are two problems here: testing a program, as opposed to a library of functions, and testing something that prints, as opposed to a function that returns values. Both make testing more difficult, so it's best to sidestep these problems as much as possible.
The usual technique is to create a library of functions and then have your program be a thin wrapper around that. These functions return their results, and only the program does the printing. This means you can use normal unit testing techniques for most of the code.
You can have a single file which is both a library and a program. Here's a simple example as hello.py.
def hello(greeting, place):
    return greeting + ", " + place + "!"

def main():
    print(hello("Hello", "World"))

if __name__ == '__main__':
    main()
That last bit is how a file can tell if it was run as a program or if it was imported as a library. It allows access to the individual functions with import hello, and it also allows the file to run as a program. See this answer for more information.
Then you can write a mostly normal unit test.
import hello
import unittest
import sys
from StringIO import StringIO  # Python 2; on Python 3 use io.StringIO
import subprocess

class TestHello(unittest.TestCase):
    def test_hello(self):
        self.assertEqual(
            hello.hello("foo", "bar"),
            "foo, bar!"
        )

    def test_main(self):
        saved_stdout = sys.stdout
        try:
            out = StringIO()
            sys.stdout = out
            hello.main()
            output = out.getvalue()
            self.assertEqual(output, "Hello, World!\n")
        finally:
            sys.stdout = saved_stdout

    def test_as_program(self):
        self.assertEqual(
            subprocess.check_output(["python", "hello.py"]),
            "Hello, World!\n"
        )

if __name__ == '__main__':
    unittest.main()
Here test_hello unit tests hello directly as a function; in a more complicated program there would be more functions to test. We also have test_main to unit test main, using StringIO to capture its output. Finally, we ensure the program will run as a program with test_as_program.
The important thing is to test as much of the functionality as functions returning data, and to test as little as possible as printed and formatted strings, and almost nothing via running the program itself. By the time we're actually testing the program, all we need to do is check that it calls main.
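As an aside, on Python 3 the manual sys.stdout swap in test_main can be replaced with contextlib.redirect_stdout, which restores stdout automatically even if the assertion fails; a minimal sketch:
import io
import unittest
from contextlib import redirect_stdout

import hello

class TestHelloRedirect(unittest.TestCase):
    def test_main(self):
        out = io.StringIO()
        with redirect_stdout(out):  # stdout is restored when the block exits
            hello.main()
        self.assertEqual(out.getvalue(), "Hello, World!\n")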

Retry .complete() for WrapperTask

I am using Luigi to run several tasks, and then I need to bulk transfer the output to a standardized file location. I've written a WrapperTask with an overridden complete() method to do this:
import luigi
from luigi.task import flatten

class TaskX(luigi.WrapperTask):
    date = luigi.DateParameter()
    client = luigi.s3.S3Client()

    def requires(self):
        yield TaskA(date=self.date)
        yield TaskB(date=self.date)

    def complete(self):
        tasks_complete = all(r.complete() for r in flatten(self.requires()))
        # at the end of everything, batch copy the files
        if tasks_complete:
            self.client.copy('current-old', 'current')
            return True
        else:
            return False

if __name__ == "__main__":
    luigi.run()
but I'm having trouble getting the conditional part of complete() to be called when the process is actually finished.
I assume this is because of the asynchronous behavior pointed out by others, but I'm not sure how to fix it.
I've tried running Luigi with these command-line parameters:
$ PYTHONPATH="" luigi --module x TaskX --worker-retry-external-task
But that doesn't seem to be working correctly. Is this the right approach to handle this type of task?
Also, I'm curious — has anyone had experience with the --worker-retry-external-task command? I'm having some trouble understanding it.
In the source code,
def _is_external(task):
    return task.run is None or task.run == NotImplemented
is called to determine whether or not a Luigi Task has a run() method, which a WrapperTask does not. Thus, I'd expect the --retry-external-task flag to retry complete() for this task until it's complete, thus performing the action. However, just playing around in the interpreter leads me to believe that:
>>> import luigi_newsletter_process
>>> task = luigi_newsletter_process.Newsletter()
>>> task.run
<bound method Newsletter.run of Newsletter(date=2016-06-22, use_s3=True)>
>>> task.run()
>>> task.run == None
False
>>> task.run() == None
True
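A stripped-down sketch of what that session demonstrates (Newsletter here is a hypothetical stand-in, not Luigi code): the comparison in _is_external looks at the method object itself, never at the result of calling it.
class Newsletter(object):
    def run(self):
        return None  # run() exists, but returns None when called

task = Newsletter()
print(task.run is None)            # False: task.run is a bound method
print(task.run == NotImplemented)  # False: still comparing the method object
print(task.run() is None)          # True: only the result of the call is None
So a task that defines run() is never considered external, even if run() returns None.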
This code snippet is not doing what it thinks it is.
Am I off-base here?
I still think that overriding .complete() should in theory have been able to do this, and I'm still not sure why it didn't, but if you're just looking for a way to bulk-transfer files after running a process, a workable solution is to have the transfer take place within a .run() method:
def run(self):
    logger.info('transferring into current directory')
    self.client.copy('current-old', 'current')
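In context, a hedged sketch of the whole workaround (TaskA, TaskB, and the S3 client come from the question; the logger name is an assumption):
import logging
import luigi

logger = logging.getLogger('luigi-interface')  # assumed logger name

class TaskX(luigi.WrapperTask):
    date = luigi.DateParameter()
    client = luigi.s3.S3Client()

    def requires(self):
        yield TaskA(date=self.date)
        yield TaskB(date=self.date)

    def run(self):
        # run() is only scheduled once all requirements are complete,
        # so the bulk copy happens exactly once, at the end.
        logger.info('transferring into current directory')
        self.client.copy('current-old', 'current')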

How to test while-loop (once) with nosetest (Python 2.7)

I'm pretty new to this whole "programming thing", but at age 34 I thought I'd like to learn the basics.
I unfortunately don't know any Python programmers. I'm learning programming out of personal interest (and more and more for the fun of it), but my "social habitat" is not "where the programmers roam" ;).
I'm almost finished with Zed Shaw's "Learn Python the Hard Way", and for the first time I can't figure out a solution to a problem. The last two days I didn't even stumble upon useful hints where to look, even though I repeatedly rephrased (and searched for) my question.
So Stack Overflow seems to be the right place.
Btw.: I often lack the correct vocabulary, so please don't hesitate to correct me :). This may be one reason why I can't find an answer.
I use Python 2.7 and nosetests.
Here is how far I've gotten with the problem (I think), in the steps I solved it:
Function 1:
def inp_1():
    s = raw_input(">>> ")
    return s
All tests import the following to be able to do the things below:
from nose.tools import *
import sys
from StringIO import StringIO
from mock import *
import __builtin__
# and of course the module with the functions
Here is the test for inp_1:
import __builtin__
from mock import *

def test_inp_1():
    __builtin__.raw_input = Mock(return_value="foo")
    assert_equal(inp_1(), 'foo')
This function/test is ok.
Quite similar is the following function 2:
def inp_2():
    s = raw_input(">>> ")
    if s == '1':
        return s
    else:
        print "wrong"
Test:
def test_inp_2():
    __builtin__.raw_input = Mock(return_value="1")
    assert_equal(inp_2(), '1')
    __builtin__.raw_input = Mock(return_value="foo")
    out = StringIO()
    sys.stdout = out
    inp_2()
    output = out.getvalue().strip()
    assert_equal(output, 'wrong')
This function/test is also ok.
Please don't assume that I really know what is happening "behind the scenes" when I use all the stuff above. I have some layman's explanations of how this all functions and why I get the results I want, but I also have the feeling that these explanations may not be entirely true. It wouldn't be the first time that how I think something works turns out to be different once I've learned more. Especially everything with "__" confuses me, and I'm scared to use it since I don't really understand what's going on.
Anyway, now I "just" want to add a while-loop to ask for input until it is correct:
def inp_3():
    while True:
        s = raw_input(">>> ")
        if s == '1':
            return s
        else:
            print "wrong"
The test for inp_3, I thought, would be the same as for inp_2. At least I am not getting error messages. But the output is the following:
$ nosetests
......
# <- Here I press ENTER to provoke a reaction
# Nothing is happening though.
^C # <- Keyboard interrupt (is this the correct word for it?)
----------------------------------------------------------------------
Ran 7 tests in 5.464s
OK
$
The other 7 tests are something else (and ok).
The test for inp_3 would be test no. 8.
The time shown is just the time that passed until I pressed CTRL-C.
I don't understand why I don't get error or "test failed" messages, but just an "OK".
So besides the fact that you may be able to point out bad syntax and other things that can be improved (I really would appreciate it if you did), my question is:
How can I test and abort while-loops with nosetest?
So, the problem here is that when you call inp_3 in the test for the second time, you are mocking raw_input with Mock(return_value="foo"). Your inp_3 function runs an infinite loop (while True), and you don't interrupt it in any way except via the if s == '1' condition. With Mock(return_value="foo") that condition is never satisfied, so your loop keeps running until you interrupt it by external means (Ctrl+C in your example). If that's intentional behavior, then "How to limit execution time of a function call in Python" will help you limit the execution time of inp_3 in the test. However, for input like in your example, developers often implement a limit on how many input attempts the user has. You can do this by using a variable to count attempts; when it reaches the maximum, the loop should stop:
def inp_3():
    max_attempts = 5
    attempts = 0
    while True:
        s = raw_input(">>> ")
        attempts += 1  # this is equal to "attempts = attempts + 1"
        if s == '1':
            return s
        else:
            print "wrong"
            if attempts == max_attempts:
                print "Max attempts used, stopping."
                break  # stops loop execution and goes to the next
                       # instruction after the loop block
    print "Stopped."
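With that limit in place, a test no longer needs Ctrl+C; a hedged sketch in the same mocking style as above (the five wrong answers are arbitrary):
def test_inp_3_max_attempts():
    # Five wrong answers exhaust max_attempts and the loop stops on its own.
    __builtin__.raw_input = Mock(side_effect=['a', 'b', 'c', 'd', 'e'])
    out = StringIO()
    sys.stdout = out
    inp_3()
    output = out.getvalue()
    assert_equal(output.count('wrong'), 5)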
Also, to learn python I can recommend book "Learning Python" by Mark Lutz. It greatly explains basics of python.
UPDATE:
I couldn't find a way to mock Python's True (or a builtin True) (and yes, that sounds a bit crazy); it looks like Python doesn't (and won't) allow me to do that. However, to achieve exactly what you want, running the infinite loop just once, you can use a little hack.
Define a function that returns True:
def true_func():
    return True
use it in the while loop:
while true_func():
and then mock it in the test with logic like this:
def true_once():
    yield True
    yield False

class MockTrueFunc(object):
    def __init__(self):
        self.gen = true_once()

    def __call__(self):
        return self.gen.next()
Then, in the test:
true_func = MockTrueFunc()
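To see the effect in isolation, a minimal sketch using the names above (in the real test you would rebind the module's true_func as shown):
true_func = MockTrueFunc()
while true_func():
    print "this body runs exactly once"  # the second call yields False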
With this, your loop will run only once. However, this construction uses a few advanced Python tricks, like generators, "__" methods, etc., so use it carefully.
But anyway, infinite loops are generally considered a bad design choice, so better not to get used to them :).
It's always important to remind me that infinite loops are bad, so thank you for that, and even more so for the short example of how to do better. I will do that whenever possible.
However, in the actual program the infinite loop is how I'd like to do it this time. The code here is just the simplified problem.
I very much appreciate your idea with the modified "true function". I never would have thought of that, and thus I learned a new "method" for how to tackle programming problems :).
It is still not the way I would like to do it this time, but it was the crucial clue I needed to solve my problem with existing methods. I never would have thought about returning a different value the second time I call the same method. It's so simple and brilliant it's astonishing me :).
The mock module has a feature that allows a different value to be returned each time the mocked method is called: side_effect.
side_effect can also be set to […] an iterable. [This is useful when] your mock is going to be called several times, and you want each call to return a different value. When you set side_effect to an iterable, every call to the mock returns the next value from the iterable.
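A minimal illustration of that behaviour (the values here are arbitrary):
from mock import Mock

m = Mock(side_effect=['foo', 'bar', '1'])
print m()  # foo
print m()  # bar
print m()  # 1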
The while-loop HAS an "exit" (is this the correct term for it?); it just needs '1' as input. I will use this to exit the loop.
def test_inp_3():
    # Test if the input is correct.
    __builtin__.raw_input = Mock(return_value="1")
    assert_equal(inp_3(), '1')
    # Test if the output is correct when the input is wrong two times.
    # The third time the input is correct, to exit the loop.
    __builtin__.raw_input = Mock(side_effect=['foo', 'bar', '1'])
    out = StringIO()
    sys.stdout = out
    inp_3()
    output = out.getvalue().strip()
    # Make sure to compare as many times as the loop is "used".
    assert_equal(output, 'wrong\nwrong')
Now the test runs and returns "ok", or an error if, for example, the first input already exits the loop.
Thank you very much again for the help. That made my day :)

Testing a failing job in ResizableDispatchQueue with trial

I'm on a project using txrdq, and I'm testing (using trial) a case where a queued job may fail. Trial marks the test case as failed whenever it hits a failure in an errback.
The errback is normal behaviour, since a queued job may fail to launch. How can I test this case using trial without failing the test?
Here's an example of the test case:
from twisted.trial.unittest import TestCase
from txrdq.rdq import ResizableDispatchQueue
from twisted.python.failure import Failure

class myTestCase(TestCase):
    def aFailingJob(self, a):
        return Failure("This is a failure")

    def setUp(self):
        self.queue = ResizableDispatchQueue(self.aFailingJob, 1)

    def tearDown(self):
        pass

    def test_txrdq(self):
        self.queue.put("Some argument", 1)
It seems likely that the exception is being logged, since the error handler just raises it. I'm not exactly sure what the error handling code in txrdq looks like, so this is just a guess, but I think it's a pretty good one based on your observations.
Trial fails any unit test that logs an exception, unless the test cleans that exception up after it's logged. Use TestCase.flushLoggedErrors(exceptionType) to deal with this:
def test_txrdq(self):
    self.queue.put("Some argument", 1)
    self.assertEqual(1, len(self.flushLoggedErrors(SomeException)))
Also notice that you should never do Failure("string"). This is analogous to raise "string". String exceptions have been deprecated in Python for a looooong time. Always construct a Failure with an exception instance:
class JobError(Exception):
    pass

def aFailingJob(self, a):
    return Failure(JobError("This is a failure"))
This makes JobError the exception type you'd pass to flushLoggedErrors.
Make sure that you understand whether queue processing is synchronous or asynchronous. If it is synchronous, your test (with the flushLoggedErrors call added) is fine. If it is asynchronous, your error handler may not have run by the time your test method returns. In that case, you're not going to be testing anything useful, and the errors might be logged after the call to flush them (making the flush useless).
Finally, if you're not writing unit tests *for* txrdq itself, then you might not want to write tests like this. You can probably unit test txrdq-using code without using an actual txrdq. A normal Queue object (or perhaps another, more specialized test double) will let you more precisely target the units in your application, making your tests faster, more reliable, and easier to debug.
This issue has now (finally!) been solved, by L. Daniel Burr. There's a new version (0.2.14) of txRDQ on PyPI.
By the way, in your test you should add from txrdq.job import Job, and then do something like this:
d = self.queue.put("Some argument", 1)
return self.assertFailure(d, Job)
Trial will make sure that d fails with a Job instance. There are a couple of new tests at the bottom of txrdq/test/test_rdq.py that illustrate this kind of assertion.
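Putting the pieces of this thread together, a hedged sketch of the final test shape (Job comes from txrdq.job; JobError is the class defined earlier; exact behaviour depends on the txRDQ version):
from txrdq.job import Job

def test_txrdq(self):
    d = self.queue.put("Some argument", 1)
    # The failure is expected: assertFailure turns it into a success.
    d = self.assertFailure(d, Job)
    # Once the errback has run, clean up anything that was logged.
    d.addCallback(lambda _: self.flushLoggedErrors(JobError))
    return d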
I'm sorry this problem caused so much head scratching for you - it was entirely my fault.
Sorry to see you're still having a problem. I don't know what's going on here, but I have been playing with it for over an hour trying to...
The queue.put method returns a Deferred. You can attach an errback to it to do the flush as @exarkun describes, and then return the Deferred from the test. I expected that to fix things (having read @exarkun's reply and gotten a comment from @idnar in #twisted). But it doesn't help.
Here's a bit of the recent IRC conversation, mentioning what I think could be happening: https://gist.github.com/2177560
As far as I can see, txRDQ is doing the right thing. The job fails and the deferred that is returned by queue.put is errbacked.
If you look in _trial_temp/test.log after you run the test, what do you see? I see an error that says Unhandled error in Deferred, and the error is a Failure with a Job in it. So it seems likely to me that the error is somewhere in txRDQ: there is a deferred that's failing, and it's passing the failure on just fine to whoever needs it, but also returning the failure, causing trial to complain. But I don't know where that is. I put a print into the __init__ of the Deferred class just out of curiosity to see how many deferreds were made during the run of the test. The answer: 12!
Sorry not to have better news. If you want to press on, go look at every deferred made by the txRDQ code. Is one of them failing with an errback that returns the failure? I don't see it, and I've put print statements all over the place to check that things are right. I guess I must be missing something.
Thanks, and thanks too, @exarkun.
