Python unittest not running specified test - python

I'm currently working my way through Python Crash Course and have run into a problem during the testing chapter. I've gone through it with a fine-tooth comb, explained it to my imaginary rubber duck, and can't see at all where I'm going wrong.
Running the test file gives me "Ran 0 tests in 0.000s" but no errors that I can see.
The first block is from my file "survey.py", and the second is the test file "testSurvey.py"
Any help is hugely appreciated.
class AnonymousSurvey():
    def __init__(self, question):
        self.question = question
        self.responses = []

    def showQuestion(self):
        print(self.question)

    def storeResponse(self, newResponse):
        self.responses.append(newResponse)

    def showResults(self):
        print("The survey results are")
        for response in self.responses:
            print("-- " + response)
import unittest
from survey import AnonymousSurvey

class TestAnonymousSurvey(unittest.TestCase):
    def TestStoreSingleResponse(self):
        question = "What is your favourite language?"
        mySurvey = AnonymousSurvey(question)
        responses = ["English", "Latin", "Franglais"]
        for response in responses:
            mySurvey.storeResponse(response)
        for response in responses:
            self.assertIn(response, mySurvey.responses)

unittest.main()

Your test methods should start with the keyword 'test', like 'test_storing_single_response()'. pytest (like unittest) only identifies methods starting with 'test' as test cases.
Check out the pytest good practices:
Conventions for Python test discovery
pytest implements the following standard test discovery:
- If no arguments are specified then collection starts from testpaths
(if configured) or the current directory. Alternatively, command line
arguments can be used in any combination of directories, file names
or node ids.
- Recurse into directories, unless they match norecursedirs.
- In those directories, search for test_*.py or *_test.py files,
imported by their test package name.
- From those files, collect test items:
  - test prefixed test functions or methods outside of class
  - test prefixed test functions or methods inside Test prefixed test classes (without an __init__ method)
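
Applied to the original test file, renaming the method so it starts with 'test' is enough for discovery to pick it up; a minimal sketch, leaving the rest of the code unchanged:

import unittest
from survey import AnonymousSurvey

class TestAnonymousSurvey(unittest.TestCase):
    def test_store_single_response(self):  # name now starts with 'test', so it is discovered
        question = "What is your favourite language?"
        mySurvey = AnonymousSurvey(question)
        responses = ["English", "Latin", "Franglais"]
        for response in responses:
            mySurvey.storeResponse(response)
        for response in responses:
            self.assertIn(response, mySurvey.responses)

unittest.main()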

Related

How to unit test a sphinx/docutils directive

I have created a Sphinx extension containing a directive.
Now I want to include unit tests before I start to refactor it.
So using pytest I created a test file containing this fixture:
import pytest
# Body and StateMachine come from docutils
from docutils.parsers.rst.states import Body
from docutils.statemachine import StateMachine

wide_format = __import__('sphinx-jsonschema.wide_format')

@pytest.fixture
def wideformat():
    state = Body(StateMachine(None, None))
    lineno = 1
    app = None
    return wide_format.WideFormat(state, lineno, app)
When this fixture is used to create tests, it fails with errors complaining about the Nones passed when instantiating the StateMachine.
I've tried to install some dummies and looked through docutils samples, but can't find a working solution.
The problem is that State and StateMachine contain references to each other, and I can't find a way to break that loop.
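
No resolution is given in this thread, but since the question already mentions trying dummies, here is a hedged sketch of one possible direction (not from the question): replace the docutils objects with unittest.mock stand-ins so the State/StateMachine cycle is never constructed. Whether mocks are sufficient depends on what WideFormat actually does with them.

from unittest import mock
import pytest

@pytest.fixture
def wideformat():
    # Illustrative mocks standing in for Body(StateMachine(None, None)) and the Sphinx app;
    # these are not the real docutils/Sphinx objects.
    state = mock.MagicMock()
    lineno = 1
    app = mock.MagicMock()
    # wide_format is assumed to be imported as in the question above
    return wide_format.WideFormat(state, lineno, app)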

Separate test cases per input files?

Most test frameworks assume that "1 test = 1 Python method/function",
and consider a test as passed when the function executes without
raising assertions.
I'm testing a compiler-like program (a program that reads *.foo
files and process their contents), for which I want to execute the same test on many input (*.foo) files. IOW, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def list_testcases(self):
        # essentially os.listdir('tests/') and filter *.foo files.

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:
- I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important for me.)
- I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
- I get the feeling that I'm abusing my test framework: there's only one test function, so the automatic test discovery of unittest sounds overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
An obvious alternative is to change my code to something like
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:
- It's a lot more code to write.
- It's easy to "forget" a testcase, i.e. create a test file in tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per testcase dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code for the second solution without having to write it by hand. But that sounds really overkill for a use-case which doesn't seem so complex.
How could I get the best of both, i.e.
automatic testcase discovery (list tests/*.foo files), test
independence and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and use the function names as arguments (e.g., filenames):
# imports
import inspect
import unittest

# test class
class Test(unittest.TestCase):
    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2, 2)

# set string for creating new function
func_string = """def test(cls):
    # get function name and use it to pass information
    filename = inspect.stack()[0][3]
    # print function name for demonstration purposes
    print(filename)
    # dummy test for demonstration purposes
    cls.assertEqual(type(filename), str)"""

# add new test for each item in list
for f in ['test_bla', 'test_blu', 'test_bli']:
    # set name of new function
    name = func_string.replace('test', f)
    # create new function
    exec(name)
    # add new function to test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()
This correctly runs all four tests and returns:
> test_bla
> test_bli
> test_blu
> test_default
> Ran 4 tests in 0.040s
> OK
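
A closure-based variant of the same idea avoids exec/eval and maps directly onto the *.foo use case from the question; a sketch with illustrative names (tests/, test_file_N and _make_test are assumptions, not from the thread):

import glob
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # placeholder for the actual per-file check
        self.assertTrue(filename.endswith('.foo'))

def _make_test(filename):
    # bind `filename` into a fresh test method via a closure
    def test(self):
        self.one_file(filename)
    return test

# one test method per input file: test_file_0, test_file_1, ...
for n, path in enumerate(sorted(glob.glob('tests/*.foo'))):
    setattr(Test, 'test_file_{}'.format(n), _make_test(path))

if __name__ == '__main__':
    unittest.main()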

How to get test cases list in Robot Framework without launching the actual tests?

I have a file test.robot with test cases.
How can I get the list of these test cases without running them, from the command line or Python?
Robot test suites are easy to parse with the robot parser:
from robot.parsing.model import TestData

suite = TestData(parent=None, source=path_to_test_suite)
for testcase in suite.testcase_table:
    print(testcase.name)
You can check out the testdoc tool. As explained in the docs, "The created documentation is in HTML format and it includes name, documentation and other metadata of each test suite and test case".
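For example, assuming a suite file named test.robot, the tool can be run as a module and the generated HTML will list every test case:

$ python -m robot.testdoc test.robot testdoc.html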
For v3.2 and up:
In RobotFramework 3.2 the parsing APIs have been rewritten, so the answer from Bryan Oakley won't work on these versions anymore.
The proper code that is compatible with both pre-3.2 and post-3.2 versions is the following:
from robot.running import TestSuiteBuilder
from robot.model import SuiteVisitor

class TestCasesFinder(SuiteVisitor):
    def __init__(self):
        self.tests = []

    def visit_test(self, test):
        self.tests.append(test)

builder = TestSuiteBuilder()
testsuite = builder.build('testsuite/')
finder = TestCasesFinder()
testsuite.visit(finder)
print(*finder.tests)
Further reading:
Visitor model
TestSuiteBuilder class reference

Use pytest to test and grade student code

Say I want to grade some student python code using tests, something like (this is pseudo-code I wish I could write):
code = __import__("student_code")  # Import the code to be tested
grade = 100
for test in all_tests():           # Loop over the tests that were gathered
    good = perform(test, code)     # Perform the test individually on the code
    if not good:                   # Do something if the code gives the wrong result
        grade -= 1
For that, I would like to use pytest (easy to write good tests), but there are many things I don't know how to do:
- how to run tests on external code? (here the code imported from the student's code)
- how to list all the tests available? (here all_tests())
- how to run them individually on code? (here perform(test, code))
I couldn't find anything related to this use case (pytest.main() does not seem to do the trick anyhow...).
I hope you see my point, cheers!
EDIT
I finally found how to perform my 1st point (apply tests on external code). In the repository where you want to perform the tests, generate a conftest.py file with:
import imp  # Import standard library
import pytest

def pytest_addoption(parser):
    """Add a custom command-line option to py.test."""
    parser.addoption("--module", help="Code file to be tested.")

@pytest.fixture(scope='session')
def module(request):
    """Import code specified with the command-line custom option '--module'."""
    codename = request.config.getoption("--module")
    # Import module (standard __import__ does not support import by filename)
    try:
        code = imp.load_source('code', codename)
    except Exception:
        print("ERROR while importing {!r}".format(codename))
        raise
    return code
Then, gather your tests in a tests.py file, using the module fixture:
def test_sample(module):
    assert module.add(1, 2) == 3
Finally, run the tests with py.test tests.py --module student.py.
I'm still working on points 2 and 3.
EDIT 2
I uploaded my (incomplete) take at this question:
https://gitlab.in2p3.fr/ycopin/pyTestExam
Help & contributions are welcome!
Very cool and interesting project. It's difficult to answer without knowing more.
Basically you should be able to do this by writing a custom plugin. Probably something you could place in a conftest.py in a test or project folder with your unittest subfolder and a subfolder for each student.
You would probably want to write two plugins:
- One to allow weighting of tests (e.g. test_foo_10 and test_bar_5) and calculation of a final grade (e.g. 490/520); teamcity-messages is an example that uses the same hooks. A rough sketch follows this list.
- Another to allow distribution of tests to separate processes (xdist is an example).
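
As an illustration of the first plugin idea, here is a hedged conftest.py sketch; the trailing-number weight convention and the _weight helper are assumptions for the example, not an existing plugin:

# conftest.py
import re

def _weight(nodeid):
    # test_foo_10 -> 10 points; tests without a trailing number count as 1
    name = nodeid.split('::')[-1].split('[')[0]
    match = re.search(r'_(\d+)$', name)
    return int(match.group(1)) if match else 1

def pytest_terminal_summary(terminalreporter, exitstatus):
    # sum the weights of passed tests and report a grade at the end of the run
    passed = terminalreporter.stats.get('passed', [])
    failed = terminalreporter.stats.get('failed', [])
    earned = sum(_weight(rep.nodeid) for rep in passed)
    total = earned + sum(_weight(rep.nodeid) for rep in failed)
    terminalreporter.write_line('Grade: {}/{}'.format(earned, total))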
I know this is not a very complete answer, but I wanted to at least point out the last point. Since there is a very high probability that students will be using overlapping module names, they would clash in a pytest world where tests are collected first and then run in a process that attempts not to re-import modules with a common name.
Even if you attempt to control for that, you will eventually have a student manipulate the global namespace in a sloppy way that could cause another student's code to fail. For that reason, you will need either a bash script to run each student's file or a plugin that runs them in separate processes.
Make this use case a graded take-home exam and see what they come up with :-) ... naaah ... but you could ;-)
I came up with something like this (where it is assumed that the sum function is the student code):
import unittest

score = 0

class TestAndGrade(unittest.TestCase):
    def test_correctness(self):
        self.assertEqual(sum([2,2]), 4)
        global score; score += 6  # Increase score

    def test_edge_cases(self):
        with self.assertRaises(Exception):
            sum([2,'hello'])
        global score; score += 1  # Increase score

    # Print the score
    @classmethod
    def tearDownClass(cls):
        global score
        print('\n\n-------------')
        print('| Score: {} |'.format(score))
        print('-------------\n')

# Run the tests
unittest.main()

Error when running Python parameterized test method

IDE: PyCharm Community Edition 3.1.1
Python: 2.7.6
I am using DDT for test parameterization: http://ddt.readthedocs.org/en/latest/example.html
I want to choose and run a parameterized test method from a test class in PyCharm -> see the example:
from unittest import TestCase
from ddt import ddt, data

@ddt
class Test_parameterized(TestCase):

    def test_print_value(self):
        print 10
        self.assertIsNotNone(10)

    @data(10, 20, 30, 40)
    def test_print_value_parametrized(self, value):
        print value
        self.assertIsNotNone(value)
When I navigate to the first test method test_print_value in the code and hit Ctrl+Shift+F10 (or use the Run 'Unittest test_print...' option from the context menu), the test is executed.
When I try the same with the parameterized test, I get an error:
Test framework quit unexpectedly
And the output contains:
/usr/bin/python2 /home/s/App/pycharm-community-3.1.1/helpers/pycharm/utrunner.py /home/s/Documents/Py/first/fib/test_parametrized.py::Test_parameterized::test_print_value_parametrized true
Testing started at 10:35 AM ...
Traceback (most recent call last):
  File "/home/s/App/pycharm-community-3.1.1/helpers/pycharm/utrunner.py", line 148, in <module>
    testLoader.makeTest(getattr(testCaseClass, a[2]), testCaseClass))
AttributeError: 'TestLoader' object has no attribute 'makeTest'

Process finished with exit code 1
However, when I run all tests in the class (by navigating to the test class name in code and using the mentioned run test option), all parameterized and non-parameterized tests are executed together without errors.
The problem is how to independently run a parameterized method from the test class - a workaround is putting one parameterized test per test class, but it is a rather messy solution.
Actually this is an issue in PyCharm's utrunner.py, which runs the unittests. If you are using DDT, there are the @ddt and @data wrappers - they are responsible for creating separate tests for each data entry. In the background these tests have different names, e.g.
@ddt
class MyTestClass(unittest.TestCase):
    @data(1, 2)
    def test_print(self, command):
        print command
This would create tests named:
- test_print_1_1
- test_print_2_2
When you try to run one test from the class (Right Click -> Run 'Unittest test_print'), PyCharm has a problem loading your tests test_print_1_1 and test_print_2_2, as it is trying to load a test named test_print.
When you look at the code of utrunner.py:
if a[1] == "":
    # test function, not method
    all.addTest(testLoader.makeTest(getattr(module, a[2])))
else:
    testCaseClass = getattr(module, a[1])
    try:
        all.addTest(testCaseClass(a[2]))
    except:
        # class is not a testcase inheritor
        all.addTest(
            testLoader.makeTest(getattr(testCaseClass, a[2]), testCaseClass))
and debug it, you will see the issue.
OK. So my fix for that is to load the proper tests from the class. It is just a workaround and it is not perfect; however, as DDT adds a TestCase as another method to the class, it is hard to find a different way to detect the right test cases than comparing by string. So instead of:
try:
    all.addTest(testCaseClass(a[2]))
you can try to use:
try:
    all_tests = testLoader.getTestCaseNames(getattr(module, a[1]))
    for test in all_tests:
        if test.startswith(a[2]):
            if test.split(a[2])[1][1].isdigit():
                all.addTest(testLoader.loadTestsFromName(test, getattr(module, a[1])))
Checking whether a digit follows the main name is a workaround to exclude similar test cases:
- test_print
- test_print_another_case
But of course it would not exclude cases like:
- test_if_prints_1
- test_if_prints_2
So in the worst case, if we haven't got a good naming convention, we will run similar tests, but in most cases it should just work for you.
When I ran into this error, it was because I had implemented an __init__ function as follows:
def __init__(self):
    super(ClassInheritingTestCase, self).__init__()
When I changed it to the following, it worked properly:
def __init__(self, *args, **kwargs):
    super(ClassInheritingTestCase, self).__init__(*args, **kwargs)
The problem was caused by me not propagating the *args and **kwargs through properly.
