Python test discovery with doctests, coverage and parallelism

... and a pony! No, seriously. I am looking for a way to organize tests that "just works". Most things do work, but not all pieces fit together. So here is what I want:
Having tests automatically discovered. This includes doctests. Note that the doctests must not all be lumped together as a single test (i.e. not what py.test --doctest-modules does).
Being able to run tests in parallel. (Something like py.test -n from xdist)
Generating a coverage report.
Make python setup.py test just work.
My current approach involves a tests directory and the load_tests protocol. All files in it are named like test_*.py, which makes python -m unittest discover just work if I create a file test_doctests.py with the following content:
import doctest
import mymodule1, mymodule2

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(mymodule1))
    tests.addTests(doctest.DocTestSuite(mymodule2))
    return tests
This approach also has the upside that one can use setuptools and supply setup(test_suite="unittest2.collector").
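For reference, a minimal setup.py along those lines (the package name is made up; unittest2 provides the collector precisely for this setuptools hook):
from setuptools import setup

setup(
    name='mypackage',
    packages=['mypackage'],
    tests_require=['unittest2'],
    # unittest2.collector performs the same discovery as
    # "python -m unittest discover" when you run "python setup.py test".
    test_suite='unittest2.collector',
)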
However, this approach has a few problems:
coverage.py expects to run a script, so I cannot use unittest2 discovery here.
py.test does not run load_tests functions, so it does not find the doctests and the --doctest-modules option is crap.
nosetests runs the load_tests functions, but does not supply any parameters. This appears totally broken on the side of nose.
How can I make things work better than this or fix some of the issues above?

This is an old question, but the problem still persists for some of us! I was just working through it and found a solution similar to kaapstorm's, but with much nicer output. I use py.test to run it, but I think it should be compatible across test runners:
import doctest
from mypackage import mymodule

def test_doctest():
    results = doctest.testmod(mymodule)
    if results.failed:
        raise Exception(results)
What I end up with in a failure case is the printed stdout output that you would get from running doctest manually, with an additional exception that looks like this:
Exception: TestResults(failed=1, attempted=21)
As kaapstorm mentioned, it doesn't count tests properly (unless there are failures), but I find that isn't worth a whole lot provided the coverage numbers come back high :)
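If the lump counting bothers you, one possible refinement (a sketch, assuming pytest and reusing the hypothetical mypackage.mymodule from above) is to parametrize over the DocTest objects that doctest's finder returns, so each docstring shows up as its own test:
import doctest

import pytest

from mypackage import mymodule

# Collect each docstring's examples as a separate pytest test, so a
# failure points at the offending function instead of one lump test.
_finder = doctest.DocTestFinder()
_runner = doctest.DocTestRunner()

@pytest.mark.parametrize("test", _finder.find(mymodule), ids=lambda t: t.name)
def test_doctest(test):
    results = _runner.run(test)
    if results.failed:
        raise Exception(results)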

I use nose, and found your question when I experienced the same problem.
What I've ended up going with is not pretty, but it does run the tests.
import doctest
import mymodule1, mymodule2

def test_mymodule1():
    assert doctest.testmod(mymodule1, raise_on_error=True)

def test_mymodule2():
    assert doctest.testmod(mymodule2, raise_on_error=True)
Unfortunately it runs all the doctests in a module as a single test. But if things go wrong, at least I know where to start looking. A failure results in a DocTestFailure, with a useful message:
DocTestFailure: <DocTest mymodule1.myfunc from /path/to/mymodule1.py:63 (4 examples)>

Is it possible to implement multiple test runners in Python unittest while only running the test suite once?

import unittest

from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner
import HTMLTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        # outfile is assumed to be a file object opened for the HTML report
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module allows me to convert the output into teamcity-messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is to either try to combine both of these modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it: looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, whereas HTMLTestRunner is a way more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests; however, it's also tasked with part of the test reporting rather than having an entirely separate test reporter (and that reporting is furthermore a responsibility split with the test result, which shouldn't be part of that one's job description either).
Frankly, if you don't have any further customisation, I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
it should be able to run unittest tests fine
IME it has better separation of concerns and pluggability, so having multiple reporters/formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output
according to the teamcity-messages readme, TeamCity reporting gets automatically enabled under pytest, so I'd assume generating HTML reports during your TeamCity builds would work fine (to be tested)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
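As an illustration of that route, a minimal sketch (the file and test names are made up; it assumes pytest, pytest-html and teamcity-messages are installed):
# test_example.py -- an ordinary unittest case; pytest collects and runs it unchanged.
import unittest

class ExampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)
Running pytest --html=report.html inside a TeamCity build should then give you both outputs from a single test run: teamcity-messages reports to TeamCity on stdout while pytest-html writes the report file.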

Read py.test's output as object

Earlier I was using Python's unittest in my project, and with it came unittest.TextTestRunner and unittest.defaultTestLoader.loadTestsFromTestCase. I used them for the following reasons:
Control the execution of the unittests using a wrapper function which calls the unittest's run method. I did not want the command-line approach.
Read the unittest's output from the result object and upload the results to a bug tracking system which allows us to generate some complex reports on code stability.
Recently there was a decision made to switch to py.test; how can I do the above using py.test? I don't want to parse any CLI/HTML output to get the results from py.test. I also don't want to write too much code in my unit test file to do this.
Can someone help me with this ?
You can use pytest's hooks to intercept the test result reporting:
conftest.py:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logreport(report):
    yield
    # Define when you want to report:
    #   when=setup/call/teardown,
    #   fields: .failed/.passed/.skipped
    if report.when == 'call' and report.failed:
        # Add to the database or an issue tracker or wherever you want.
        print(report.longreprtext)
        print(report.sections)
        print(report.capstdout)
        print(report.capstderr)
Similarly, you can intercept one of these hooks to inject your code at the needed stage (in some cases, with a try-except around the yield):
pytest_runtest_protocol(item, nextitem)
pytest_runtest_setup(item)
pytest_runtest_call(item)
pytest_runtest_teardown(item, nextitem)
pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
Read more: Writing pytest plugins
All of this can be easily done either with a tiny plugin made as a simple installable library, or as a pseudo-plugin conftest.py which just lies around in one of the directories with the tests.
It looks like pytest lets you launch tests from Python code instead of using the command line: you just pass the same arguments to the function call that you would put on the command line.
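For example (pytest.main takes the same argument list as the CLI and returns the exit code; the tests/ path here is hypothetical):
import pytest

# Equivalent to running "pytest -q tests/" from the shell;
# an exit code of 0 means all tests passed.
exit_code = pytest.main(["-q", "tests/"])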
Pytest will create resultlog-format files, but the feature is deprecated. The documentation suggests using the pytest-tap plugin, which produces output in the Test Anything Protocol (TAP) format.

TestSuite vs "test discover"

Is there any difference between creating a TestSuite and adding all the TestCases to it, versus just running python -m unittest discover in the TestCases directory?
For example, for a directory with two TestCases: test_case_1.py and test_case_2.py:
import unittest
from test_case_1 import TestCaseClass as test1
from test_case_2 import TestCaseClass as test2

suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(test1))
suite.addTest(unittest.makeSuite(test2))
unittest.TextTestRunner().run(suite)
Or just cd into that directory and run python -m unittest discover.
I'm getting the same result either way, but I'm interested in knowing whether one way is preferred over the other, and why.
I think an obvious benefit in favor of discover is maintenance.
After a month, you get rid of test_case_2 - some of your code above will fail (the import) and you'll have to correct your above script. That's annoying, but not the end of the world.
After two months, someone on your team makes test_case_3, but is unaware that they need to add it to the script above. No tests fail, and everyone is happy - the problem is, nothing from test_case_3 actually runs. You might counter that it's unreasonable to write new tests and not notice that they're not running, which brings us to the next scenario.
Even worse - after three months, someone merges two versions of your script, and test_case_3 gets squeezed out again. This might go unnoticed: until it's corrected, people can work all they want on the stuff that test_case_3 is supposed to check, but it stays untested.
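For completeness, discovery is also available programmatically through the stdlib loader, which gives you a script entry point without the hand-maintained import list; a minimal sketch:
import unittest

# Programmatic equivalent of "python -m unittest discover":
# collects every test_*.py below the current directory.
suite = unittest.defaultTestLoader.discover(start_dir=".", pattern="test_*.py")
unittest.TextTestRunner().run(suite)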

Running the same tests with different configurations

I have some Python code abstracting the database and the business logic on top of it. This code is already covered by unit tests, but now I need to test this code against different DBs (MySQL, SQLite, etc.).
What is the usual pattern for running the same set of tests with different configurations? My goal is making sure that the abstraction layer works as expected independently of the underlying database. If it helps, I'm using nosetests for running tests, but it seems to lack the test suite concept.
Best regards.
I like to use test fixtures for situations in which I have several similar tests. In Python, under Nose, I usually implement this as a common test module imported by other modules. For instance, I might use the following file structure:
db_fixtures.py:
import unittest

class BaseDB(unittest.TestCase):
    def testFirstOperation(self):
        self.db.query("Foo")

    def testSecondOperation(self):
        self.db.query("Blah")
database_tests.py:
import db_fixtures

class SQliteTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createSqliteconnection()

class MySQLTest(db_fixtures.BaseDB):
    def setUp(self):
        self.db = createMySQLconnection()
This will run all tests defined in BaseDB on both MySQL and SQLite. Note that I named db_fixtures.py in such a way that it won't be run by Nose.
Nose supports test suites; just import and use unittest.TestSuite. In fact, nose will happily run any tests written using the standard library's unittest module, so tests do not need to be written in the nose style to be discovered by the nose test runner.
However, I suspect you need more than test suite support to do the kind of testing you are talking about, but more detail about your application is necessary to really address that.
Use the attrib plugin, and on the command line:
nosetests -s -a 'sqlite'
nosetests -s -a 'mysql'
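For the -a filter to select anything, the tests have to carry matching attributes; a sketch of how the test classes might be tagged with nose's attrib plugin (class and method names are made up):
import unittest

from nose.plugins.attrib import attr

@attr('sqlite')
class SQLiteTests(unittest.TestCase):
    def test_query(self):
        pass  # exercise the SQLite-backed abstraction layer here

@attr('mysql')
class MySQLTests(unittest.TestCase):
    def test_query(self):
        pass  # exercise the MySQL-backed abstraction layer here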

Is it possible to run doctests using unit2

I recently switched from nose to the new unittest2 package for my Python unit testing needs. It does everything I want, except for the fact that I can't get its "discover" command to recognize the doctests in my code - I still have to use nose to run them. Is this not implemented, or is there something I'm missing here?
Unit2 only discovers regular Python tests. In order to have it run your doctests, I'm afraid you will need to write some minimal boilerplate. Also, the upcoming plugin architecture will make it easy to automate some of these tasks.
In the meantime, you might want to take a look at tox (described here by the unittest2 creator): http://www.voidspace.org.uk/python/weblog/arch_d7_2010_07_10.shtml
The boilerplate needed to tell unit2 about your doctests is actually given in the current doctest documentation, though it took me a few minutes to find it:
http://docs.python.org/library/doctest.html#unittest-api
Note that you can pass module names to the DocTestSuite constructor instead of having to import the module yourself, which can cut the length of your boilerplate file in half; it just needs to look like:
from doctest import DocTestSuite
from unittest import TestSuite

def load_tests(loader, tests, pattern):
    suite = TestSuite()
    suite.addTests(DocTestSuite('my.module.one'))
    suite.addTests(DocTestSuite('my.module.two'))
    suite.addTests(DocTestSuite('my.module.three'))
    return suite
