How to unit test a sphinx/docutils directive - python

I have created a Sphinx extension containing a directive.
Now I want to include unit tests before I start to refactor it.
So using pytest I created a test file containing this fixture:
wide_format = __import__('sphinx-jsonschema.wide_format')

@pytest.fixture
def wideformat():
    state = Body(StateMachine(None, None))
    lineno = 1
    app = None
    return wide_format.WideFormat(state, lineno, app)
When used to create tests, this fixture fails with errors complaining about the Nones passed when instantiating the StateMachine.
I've tried to install some dummies and looked through docutils samples, but I can't find a working solution.
The problem is that State and StateMachine contain references to each other, and I can't find a way to break that loop.
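One possible direction (a sketch, not a verified fix) is to not build a real State/StateMachine pair at all and hand WideFormat mock objects instead, via unittest.mock:
# Sketch only: MagicMock stands in for the docutils Body/StateMachine pair,
# so their circular references never need to be constructed. It assumes
# WideFormat only stores `state` and calls into it later; the mocked
# attributes may need adjusting to whatever the directive actually touches.
from unittest.mock import MagicMock
import pytest

wide_format = __import__('sphinx-jsonschema.wide_format')  # import as in the question

@pytest.fixture
def wideformat():
    state = MagicMock()  # stand-in for docutils.parsers.rst.states.Body
    lineno = 1
    app = MagicMock()    # stand-in for the Sphinx application object
    return wide_format.WideFormat(state, lineno, app)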

Related

Python unittest not running specified test

I'm currently working my way through Python Crash Course and have run into a problem during the testing chapter. I've gone through it with a comb, explained it to my imaginary rubber duck, and can't see at all where I'm going wrong.
Running the test file gives me "Ran 0 tests in 0.000s" but no errors that I can see.
The first block is from my file "survey.py", and the second is the test file "testSurvey.py"
Any help is hugely appreciated.
class AnonymousSurvey():
    def __init__(self, question):
        self.question = question
        self.responses = []

    def showQuestion(self):
        print(self.question)

    def storeResponse(self, newResponse):
        self.responses.append(newResponse)

    def showResults(self):
        print("The survey results are")
        for response in self.responses:
            print("-- " + response)
import unittest
from survey import AnonymousSurvey

class TestAnonymousSurvey(unittest.TestCase):
    def TestStoreSingleResponse(self):
        question = "What is your favourite language?"
        mySurvey = AnonymousSurvey(question)
        responses = ["English", "Latin", "Franglais"]
        for response in responses:
            mySurvey.storeResponse(response)
        for response in responses:
            self.assertIn(response, mySurvey.responses)

unittest.main()
Your test methods should start with the keyword 'test', like 'test_storing_single_response()'.
pytest identifies methods starting with 'test' as test cases; check out the pytest good practices:
Conventions for Python test discovery
pytest implements the following standard test discovery:
- If no arguments are specified then collection starts from testpaths
(if configured) or the current directory. Alternatively, command line
arguments can be used in any combination of directories, file names
or node ids.
- Recurse into directories, unless they match norecursedirs.
- In those directories, search for test_*.py or *_test.py files,
imported by their test package name.
- From those files, collect test items:
- test prefixed test functions or methods outside of class
- test prefixed test functions or methods inside Test prefixed test
classes (without an __init__ method)
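For example, renaming the method from the question so that it starts with a lowercase 'test' prefix is enough for it to be discovered:
import unittest
from survey import AnonymousSurvey

class TestAnonymousSurvey(unittest.TestCase):
    def test_store_single_response(self):  # lowercase 'test' prefix, so it is discovered
        question = "What is your favourite language?"
        mySurvey = AnonymousSurvey(question)
        for response in ["English", "Latin", "Franglais"]:
            mySurvey.storeResponse(response)
            self.assertIn(response, mySurvey.responses)

unittest.main()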

Py.test - Applying variables to decorators from a csv?

Please bear with me while I try to explain my predicament; I'm still a Python novice, so my terminology may not be correct. Also, I'm sorry for the inevitable long-windedness of this post, but I'll try to explain in as much relevant detail as possible.
A quick rundown:
I'm currently developing a suite of Selenium tests for a set of websites that are essentially the same in functionality, using py.test
Tests results are uploaded to TestRail, using the pytest plugin pytest-testrail.
Tests are tagged with the decorator @pytestrail.case(id) with a unique case ID
A typical test of mine looks like this:
@pytestrail.case('C100123')  # associates the function with the relevant TR case
@pytest.mark.usefixtures()
def test_login():
    # test code goes here
As I mentioned before, I'm aiming to create one set of code that handles a number of our websites with (virtually) identical functionality, so a hardcoded decorator in the example above won't work.
I tried a data driven approach with a csv and a list of the tests and their case IDs in TestRail.
Example:
website1.csv:
Case ID | Test name
C100123 | test_login
website2.csv:
Case ID | Test name
C222123 | test_login
The code I wrote would use the inspect module to find the name of the test being run, look up the relevant test ID, and put it into a variable called test_id:
import csv
import inspect

class trp(object):
    def __init__(self):
        pass

    with open(testcsv) as f:  # testcsv could be website1.csv or website2.csv
        reader = csv.reader(f)
        next(reader)  # skip header
        tests = [r for r in reader]

    def gettestcase(self):
        self.current_test = inspect.stack()[3][3]
        for row in trp.tests:
            if self.current_test == row[2]:
                self.test_id = (row[0])
                print(self.test_id)
                return self.test_id, self.current_test

    def gettestid(self):
        self.gettestcase()
The idea was that the decorator would change dynamically based on the csv that I was using at the time.
@pytestrail.case(test_id)  # now a variable
@pytest.mark.usefixtures()
def test_login():
    trp.gettestid()
    # test code goes here
So if I ran test_login for website1, the decorator would look like:
@pytestrail.case('C100123')
and if I ran test_login for website2 the decorator would be:
@pytestrail.case('C222123')
I felt mighty proud of coming up with this solution by myself and tried it out... it didn't work. While the code does work by itself, I would get an exception because test_id is undefined (I understand why - gettestcase is executed after the decorator, so of course it would crash).
The only other way I can handle this is to apply the csv and test IDs before any test code is executed. My question is - how would I know how to associate the tests with their test IDs? What would an elegant, minimal solution to this be?
Sorry for the long winded question. I'll be watching closely to answer any questions if you need more explanation.
pytest is very good at doing all kinds of metaprogramming stuff for the tests. If I understand your question correctly, the code below will do the dynamic test marking with the pytestrail.case marker. In the project root dir, create a file named conftest.py and place this code in it:
import csv

from pytest_testrail.plugin import pytestrail

with open('website1.csv') as f:
    reader = csv.reader(f)
    next(reader)
    tests = [r for r in reader]

def pytest_collection_modifyitems(items):
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))
Now you don't need to mark the test with @pytestrail.case() at all - just write the rest of the code and pytest will take care of the marking:
def test_login():
    assert True
When pytest starts, the code above will read website1.csv and store the test IDs and names just as you did in your code. Before the test run starts, the pytest_collection_modifyitems hook will execute, analyzing the collected tests - if a test has the same name as one in the csv file, pytest will add the pytestrail.case marker with the corresponding test ID to it.
I believe the reason this isn't working as you would expect has to do with how Python reads and executes files. When Python starts executing, it reads in the linked Python file(s) and executes each line one by one, in turn. Things at the 'root' indentation level (function/class definitions, decorators, variable assignments, etc.) get run exactly once as they are loaded in, and never again. In your case, the Python interpreter reads in the pytest-testrail decorator, then the pytest decorator, and finally the function definition, executing each one once, ever.
(Side note, this is why you should never use mutable objects as function argument defaults: Common Gotchas)
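A tiny illustration of that timing (not from the question's code, just a hedged sketch): the decorator's argument is evaluated once, while the module is imported, long before any test body runs.
def record_case(case_id):
    # stand-in for pytestrail.case; prints when the decorator line is evaluated
    print("decorator evaluated with", case_id)  # happens at import time
    def wrapper(func):
        return func
    return wrapper

@record_case("C100123")  # printed as soon as the module is imported
def test_login():
    pass  # the body only runs later, during the test phase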
Given that you want to first deduce the current test name, then associate that with a test case ID, and finally use that ID with the decorator, I'm not sure that is possible with pytest-testrail's current functionality. At least, not possible without some esoteric and difficult to debug/maintain hack using descriptors or the like.
I think you realistically have one option: use a different TestRail client and update your pytest structure to use the new client. Two python clients I can recommend are testrail-python and TRAW (TestRail Api Wrapper)(*)
It will take more work on your part to create the fixtures for starting a run, updating results, and closing the run, but I think in the end you will have a more portable suite of tests and better results reporting.
(*) full disclosure: I am the creator/maintainer of TRAW, and also made significant contributions to testrail-python

Separate test cases per input files?

Most test frameworks assume that "1 test = 1 Python method/function",
and consider a test as passed when the function executes without
raising assertions.
I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), for which I want to execute the same test on many input (*.foo) files. In other words, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def list_testcases(self):
        # essentially os.listdir('tests/') and filter *.foo files.

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:

- I don't get relevant reporting like "X failures out of Y tests" nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important to me.)
- I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
- I get the feeling that I'm abusing my test framework: there's only one test function, so the automatic test discovery of unittest sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
An obvious alternative is to change my code to something like
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:

- It's a lot more code to write.
- It's easy to "forget" a testcase, i.e. create a test file in tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per testcase dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code for the second solution without having to write it by hand. But that sounds really overkill for a use-case which doesn't seem so complex.
How could I get the best of both, i.e.
automatic testcase discovery (list tests/*.foo files), test
independence and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and use the function names as arguments (e.g., filenames):
# imports (inspect is needed because the generated functions call inspect.stack())
import inspect
import unittest

# test class
class Test(unittest.TestCase):
    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2, 2)

# set string for creating new function
func_string = """def test(cls):
    # get function name and use it to pass information
    filename = inspect.stack()[0][3]
    # print function name for demonstration purposes
    print(filename)
    # dummy test for demonstration purposes
    cls.assertEqual(type(filename), str)"""

# add new test for each item in list
for f in ['test_bla', 'test_blu', 'test_bli']:
    # set name of new function
    name = func_string.replace('test', f)
    # create new function
    exec(name)
    # add new function to test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()
This correctly runs all four tests and returns:
> test_bla
> test_bli
> test_blu
> test_default
> Ran 4 tests in 0.040s
> OK
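For what it's worth, the same idea can be written without exec/eval by building the methods with a small factory function and setattr (a sketch along the lines the question already hinted at, not part of the original answer):
import unittest

class Test(unittest.TestCase):
    pass

def make_test(filename):
    # factory returning a test method bound to one input value
    def test(self):
        self.assertEqual(type(filename), str)  # replace with the real per-file checks
    return test

for name in ['test_bla', 'test_blu', 'test_bli']:
    setattr(Test, name, make_test(name))

if __name__ == "__main__":
    unittest.main()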

Removing cached files after a pytest run

I'm using a joblib.Memory to cache expensive computations when running tests with py.test. The code I'm using reduces to the following,
from joblib import Memory

memory = Memory(cachedir='/tmp/')

@memory.cache
def expensive_function(x):
    return x**2  # some computationally expensive operation here

def test_other_function():
    input_ds = expensive_function(x=10)
    ## run some tests with input_ds
which works fine. I'm aware this could possibly be done more elegantly with the tmpdir_factory fixture, but that's beside the point.
The issue I'm having is how to clean up the cached files once all the tests have run:

- is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
- is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
I wouldn't go down that path. Global mutable state is something best avoided, particularly in testing.
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, add an auto-used session-scoped fixture into your project-level conftest.py file:
# conftest.py
import pytest

@pytest.yield_fixture(autouse=True, scope='session')
def test_suite_cleanup_thing():
    # setup
    yield
    # teardown - put your command here
The code after the yield will be run - once - at the end of the test suite, regardless of pass or fail.
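Tied back to the question, that teardown could clear the joblib cache directly. A sketch, assuming the Memory object lives in a module the tests can import (here called cache, a made-up name):
# conftest.py
import pytest
from cache import memory  # the Memory(cachedir='/tmp/') object from the question

@pytest.yield_fixture(autouse=True, scope='session')
def cleanup_joblib_cache():
    yield
    memory.clear()  # joblib's Memory.clear() removes everything cached during the run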
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
There are actually a couple of ways to do that, each with pros and cons. I think this SO answer sums them up quite nicely - https://stackoverflow.com/a/22793013/3023841 - but for example:
def pytest_namespace():
    return {'my_global_variable': 0}

def test_namespace(self):
    assert pytest.my_global_variable == 0
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, py.test has teardown functions available:
def setup_module(module):
    """ setup any state specific to the execution of the given module."""

def teardown_module(module):
    """ teardown any state that was previously setup with a setup_module
    method.
    """

Why does the Django Behave test runner say "ignoring label with dot" when I run a single unit test?

I have Behave acceptance tests and unittest/django.test unit tests. I have
TEST_RUNNER = 'django_behave.runner.DjangoBehaveTestSuiteRunner'
in settings.py. I have multiple files of unit tests:
myapp/tests
    __init__.py  # empty
    tests_a.py
    tests_b.py
I want to run one file of unit tests. (Not one feature; I know how to do that.) When I do
python manage.py test myapp.tests.tests_a
I get
Ignoring label with dot in: myapp.tests.tests_a
and then tests_a.py runs. Great! Only the tests I wanted to run ran. But what is the test runner talking about ignoring? I haven't found another invocation that runs the tests I want but doesn't emit the warning. What's going on here?
Django 1.10.2, django-behave 0.1.5.
django-behave allows passing app names like this:
python manage.py test app1 app2
When you do this, it loads the features that belong to each app. You can see that code in django_behave/runner.py. The link I'm giving here points to the latest released version at the time of writing this answer. In that module, you'll find:
def build_suite(self, test_labels, extra_tests=None, **kwargs):
    extra_tests = extra_tests or []
    #
    # Add BDD tests to the extra_tests
    #

    # always get all features for given apps (for convenience)
    for label in test_labels:
        if '.' in label:
            print("Ignoring label with dot in: %s" % label)
            continue
        app = get_app(label)

        # Check to see if a separate 'features' module exists,
        # parallel to the models module
        features_dir = get_features(app)
        if features_dir is not None:
            # build a test suite for this directory
            extra_tests.append(self.make_bdd_test_suite(features_dir))

    return super(DjangoBehaveTestSuiteRunner, self
                 ).build_suite(test_labels, extra_tests, **kwargs)
When the code runs into a label that has a dot in it, it assumes it is not an app name and just skips it. So you can do:
python manage.py test app1 app2 some.module.name
And some.module.name won't cause django-behave to try to load an app named some.module.name and fail.
The very latest version of the code, which is not released yet, no longer puts out a notice about ignoring labels.
