I run
nosetests -v --nocapture --nologcapture tests/reward/test_stuff.py
I get:
----------------------------------------------------------------------
Ran 7 tests in 0.005s
OK
The tests without decorators run fine; however, I have some fixture tests set up like so, and they aren't run by the above command:
@use_fixtures(fixtures.ap_user_data, fixtures.ap_like_data, fixtures.other_reward_data)
def test_settings_are_respected(data):
    """
    Tests setting takes effect
    """
    res = func()
    assert len(res) > 0
The decorator is:
def use_fixtures(*fixtures):
    """
    Decorator for tests, abstracts the build-up required when using fixtures
    """
    def real_use_fixtures(func):
        def use_fixtures_inner(*args, **kwargs):
            env = {}
            for cls in fixtures:
                name = cls.__name__
                table = name[:-5]  # curtails _data
                env[table] = get_orm_class(table)
            fixture = SQLAlchemyFixture(env=env, engine=engine, style=TrimmedNameStyle(suffix="_data"))
            return fixture.with_data(*fixtures)(func)()
        return use_fixtures_inner
    return real_use_fixtures
Is my use of decorators stopping nosetests from running my tests?
I don't know if your decorator is causing nose to miss tests, but what if you take a test with a decorator, and remove the decorator long enough to see if it's still missed?
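If the decorator does turn out to be the cause, a likely culprit is that nose matches test functions by their __name__, and the wrapper returned by your decorator is named use_fixtures_inner, which doesn't match nose's test pattern. A minimal sketch of a possible fix using functools.wraps to preserve the original name (the fixture setup itself is kept as in the question):

import functools

def use_fixtures(*fixtures):
    """Decorator for tests, abstracts the build-up required when using fixtures."""
    def real_use_fixtures(func):
        @functools.wraps(func)  # keep the original test name so nose still collects it
        def use_fixtures_inner(*args, **kwargs):
            env = {}
            for cls in fixtures:
                table = cls.__name__[:-5]  # curtails _data
                env[table] = get_orm_class(table)
            fixture = SQLAlchemyFixture(env=env, engine=engine,
                                        style=TrimmedNameStyle(suffix="_data"))
            return fixture.with_data(*fixtures)(func)()
        return use_fixtures_inner
    return real_use_fixtures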
I have the minimal working example below for dynamically generating test cases and running them using nose.
import logging
import unittest

import nose


class RegressionTests(unittest.TestCase):
    """Our base class for dynamically created test cases."""

    def regression(self, input):
        """Method that runs the test."""
        self.assertEqual(1, 1)


def create(input):
    """Called by create_all below to create each test method."""
    def do_test(self):
        self.regression(input)
    return do_test


def create_all():
    """Create all of the unit test cases dynamically."""
    logging.info('Start creating all unit tests.')
    inputs = ['A', 'B', 'C']
    for input in inputs:
        testable_name = 'test_{0}'.format(input)
        testable = create(input)
        testable.__name__ = testable_name
        class_name = 'Test_{0}'.format(input)
        globals()[class_name] = type(class_name, (RegressionTests,), {testable_name: testable})
        logging.debug('Created test case %s with test method %s', class_name, testable_name)
    logging.info('Finished creating all unit tests.')


if __name__ == '__main__':
    # Create all the test cases dynamically
    create_all()
    # Execute the tests
    logging.info('Start running tests.')
    nose.runmodule(name='__main__')
    logging.info('Finished running tests.')
When I run the tests using python nose_mwe.py --nocapture --verbosity=2, they run fine and I get the output:
test_A (__main__.Test_A) ... ok
test_B (__main__.Test_B) ... ok
test_C (__main__.Test_C) ... ok
However, when I try to use the processes command line parameter to make the tests run in parallel, e.g. python nose_mwe.py --processes=3 --nocapture --verbosity=2, I get the following errors.
Failure: ValueError (No such test Test_A.test_A) ... ERROR
Failure: ValueError (No such test Test_B.test_B) ... ERROR
Failure: ValueError (No such test Test_C.test_C) ... ERROR
Is there something simple that I am missing here to allow the dynamically generated tests to run in parallel?
As far as I can tell, you just need to make sure that create_all is run in every test process. Just moving it out of the __main__ block works for me, so the end of the file would look like:
# as above

# Create all the test cases dynamically
create_all()

if __name__ == '__main__':
    # Execute the tests
    logging.info('Start running tests.')
    nose.runmodule(name='__main__')
    logging.info('Finished running tests.')
I'm trying pytest parametrization with pytest_generate_tests():
conftest.py
def pytest_generate_tests(metafunc):
    if 'cliautoconfigargs' in metafunc.fixturenames:
        metafunc.parametrize(
            'cliautoconfigargs', list(<some list of params>)
        )
test_cliautoconfig.py
def test_check_conf_mode(cliautoconfigargs):
    assert True


def test_enable_disable_command(cliautoconfigargs):
    assert True
With this configuration, each test runs through all of its parameters, and only after it completes does the next test start with its parameters. I'd like to configure testing so that all tests run with their first parameter, then all tests run with their second parameter, and so on.
For example, I have the following output:
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs0]
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs1]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs0]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs1]
I want to have this instead:
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs0]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs0]
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs1]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs1]
Sorry for the issue duplication.
Found the answer in "maintaining order of test execution when parametrizing tests in test class":
conftest.py
def pytest_generate_tests(metafunc):
    if 'cliautoconfigargs' in metafunc.fixturenames:
        metafunc.parametrize(
            'cliautoconfigargs', list(<some list of params>), scope="class"
        )
test_cliautoconfig.py
class TestCommand:
    def test_check_conf_mode(self, cliautoconfigargs):
        assert True

    def test_enable_disable_command(self, cliautoconfigargs):
        assert True
I have a function that I don't want to run every time I run tests in my Flask-RESTful API. This is an example of the setup:
class function(Resource):
    def post(self):
        print 'test'
        do_action()
        return {'success': True}
In my test I want to run this function but ignore do_action(). How would I make this happen using pytest?
This seems like a good opportunity to mark the tests:
@pytest.mark.foo_test
class function(Resource):
    def post(self):
        print 'test'
        do_action()
        return {'success': True}
Then if you call with
py.test -v -m foo_test
It will run only those tests marked "foo_test"
If you call with
py.test -v -m "not foo_test"
It will run all tests not marked "foo_test"
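Note that for these selection flags to have any effect, the marker needs to be on something pytest collects (a test function or test class) rather than on the Resource itself. A minimal sketch, where the test name, the route, and the Flask test-client fixture are assumptions:

import pytest

@pytest.mark.foo_test
def test_post_success(client):  # "client" is an assumed Flask test-client fixture
    response = client.post('/function')  # the route is an assumption
    assert response.status_code == 200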
You can mock do_action in your test:
def test_post(resource, mocker):
    m = mocker.patch.object(module_with_do_action, 'do_action')
    resource.post()
    assert m.call_count == 1
So the actual function will not be called in this test, with the added benefit that you can check whether the post implementation actually calls the function. This requires pytest-mock to be installed (shameless plug).
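If you'd rather not pull in a plugin, pytest's built-in monkeypatch fixture can do the same patching. A rough sketch, assuming the Resource class is importable and do_action lives in a module called myapi.resources (both names are assumptions):

def test_post_skips_do_action(monkeypatch):
    calls = []
    # Patch do_action where the resource module looks it up; "myapi.resources" is an assumed path.
    monkeypatch.setattr('myapi.resources.do_action', lambda *args, **kwargs: calls.append(args))
    result = function().post()  # instantiating the Resource directly, outside Flask routing
    assert result == {'success': True}
    assert len(calls) == 1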
I have a tests file in one of my Pyramid projects. It has one suite with six tests in it:
...
from .scripts import populate_test_data


class FunctionalTests(unittest.TestCase):

    def setUp(self):
        settings = appconfig('config:testing.ini',
                             'main',
                             relative_to='../..')
        app = main({}, **settings)
        self.testapp = TestApp(app)
        self.config = testing.setUp()
        engine = engine_from_config(settings)
        DBSession.configure(bind=engine)
        populate_test_data(engine)

    def tearDown(self):
        DBSession.remove()
        testing.tearDown()

    def test_index(self):
        ...

    def test_login_form(self):
        ...

    def test_read_recipe(self):
        ...

    def test_tag(self):
        ...

    def test_dish(self):
        ...

    def test_dashboard_forbidden(self):
        ...
Now, when I run nosetests templates.py (where templates.py is the mentioned file) I get the following output:
......E
======================================================================
ERROR: templates.populate_test_data
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yentsun/env/local/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/yentsun/env/local/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/util.py", line 622, in newfunc
return func(*arg, **kw)
TypeError: populate_test_data() takes exactly 1 argument (0 given)
----------------------------------------------------------------------
Ran 7 tests in 1.985s
FAILED (errors=1)
When I run the tests with test suite specified nosetests templates.py:FunctionalTests, the output is, as expected, ok:
......
----------------------------------------------------------------------
Ran 6 tests in 1.980s
OK
Why do I have different output and why is an extra (7th) test run?
UPDATE. It's a bit frustrating, but when I removed the word test from the name populate_test_data (it became populate_dummy_data), everything worked fine.
The problem is solved for now, but maybe somebody knows what went wrong here - why was a function called from setUp collected as a test?
Finding and running tests
nose, by default, follows a few simple rules for test discovery.
If it looks like a test, it's a test. Names of directories, modules, classes and functions are compared against the testMatch regular expression, and those that match are considered tests. Any class that is a unittest.TestCase subclass is also collected, so long as it is inside of a module that looks like a test.
(from nose 1.3.0 documentation)
In nose's code, the regexp is defined as r'(?:^|[\b_\.%s-])[Tt]est' % os.sep, and if you inspect nose/selector.py, method Selector.matches(self, name), you'll see that the code uses re.search, which looks for a match anywhere in the string, not only at the beginning as re.match does.
A small test:
>>> import re
>>> import os
>>> testMatch = r'(?:^|[\b_\.%s-])[Tt]est' % os.sep
>>> re.match(testMatch, 'populate_test_data')
>>> re.search(testMatch, 'populate_test_data')
<_sre.SRE_Match object at 0x7f3512569238>
So populate_test_data indeed "looks like a test" by nose's standards.
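If renaming the helper is not an option, nose also honors an explicit opt-out. A small sketch using nose.tools.nottest, which simply sets __test__ = False on the function:

from nose.tools import nottest

@nottest
def populate_test_data(engine):
    ...

Setting populate_test_data.__test__ = False by hand has the same effect, since that is the attribute nose checks before falling back to the name match.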
I am using Python's unittest with simple code like so:
suite = unittest.TestSuite()
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(module1))
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(module2))
However, I am wanting to do some custom things to each test after they have been gathered by the suite. I thought I could do something like this to iterate over the test cases in suite:
print suite.countTestCases()
for test in suite:  # Also tried with suite.__iter__()
    # Do something with test
    print test.__class__
However, for as many test cases as I load, it only ever prints
3
<class 'unittest.suite.TestSuite'>
Is there a way I can get all the objects of class TestCase from the suite? Is there some other way I should be loading test cases to facilitate this?
Try
for test in suite:
    print test._tests
I use this function as some of the elements in suite._tests are suites themselves:
def list_of_tests_gen(s):
    """ a generator of tests from a suite
    """
    for test in s:
        if unittest.suite._isnotsuite(test):
            yield test
        else:
            for t in list_of_tests_gen(test):
                yield t
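A usage sketch, assuming suite is the TestSuite built earlier in the question:

for test in list_of_tests_gen(suite):
    print test.id()  # each yielded object is an individual TestCase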
A neat way of getting a list of tests is to use the nose2 collect plugin.
$ nose2 -s <testdir> -v --plugin nose2.plugins.collect --collect-only
test_1 (test_test.TestClass1)
Test Desc 1 ... ok
test_2 (test_test.TestClass1)
Test Desc 2 ... ok
test_3 (test_test.TestClass1)
Test Desc 3 ... ok
test_2_1 (test_test.TestClass2)
Test Desc 2_1 ... ok
test_2_2 (test_test.TestClass2)
Test Desc 2_2 ... ok
----------------------------------------------------------------------
Ran 5 tests in 0.001s
OK
It doesn't really run the tests.
You can install nose2 (and its plugins) like this:
$ pip install nose2
And of course you can use nose2 to run unit tests, for example:
# run tests from testfile.py
$ nose2 -v -s . testfile
# generate junit xml results:
$ nose2 -v --plugin nose2.plugins.junitxml -X testfile --junit-xml
$ mv nose2-junit.xml results_testfile.xml
Here is an internal helper function that was recently committed to Django that lets one iterate over a test suite's test cases:
from unittest import TestCase

def iter_test_cases(suite, reverse=False):
    """Return an iterator over a test suite's unittest.TestCase objects."""
    if reverse:
        suite = reversed(tuple(suite))
    for test in suite:
        if isinstance(test, TestCase):
            yield test
        else:
            # Otherwise, assume it is a test suite.
            yield from iter_test_cases(test, reverse=reverse)
This approach is needed as test suites can be nested arbitrarily deeply.
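A usage sketch, assuming the tests live in a directory named tests (the directory name is an assumption):

import unittest

suite = unittest.defaultTestLoader.discover('tests')
for case in iter_test_cases(suite):
    print(case.id())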
If you want to get the list of test functions (methods prefixed with test in unittest.TestCase) in all your test modules:
import unittest

loader = unittest.TestLoader()
suite = loader.discover(YOUR_TEST_PATH)

for test_suite in suite:
    for test_case in test_suite:
        for test in test_case:
            print(f"- {test}")
that gives:
- test_isupper (test_suite1.test_file1.TestA)
- test_upper (test_suite1.test_file1.TestA)
- test_isupper (test_suite2.test_file2.TestB)
- test_upper (test_suite2.test_file2.TestB)
or:
import unittest

loader = unittest.TestLoader()
suite = loader.discover(YOUR_TEST_PATH)

for test_suite in suite:
    for test_case in test_suite:
        for test in test_case:
            print(f"- {test.id()}")
that gives:
test_suite1.test_file1.TestA.test_isupper
test_suite1.test_file1.TestA.test_upper
test_suite2.test_file2.TestB.test_isupper
test_suite2.test_file2.TestB.test_upper