Avoid setUpClass running for every nose cherry-picked test - python

This is my tests class, in mymodule.foo:
from unittest import TestCase

class SomeTestClass(TestCase):
    @classmethod
    def setUpClass(cls):
        ...  # Do the setup for my tests

    def test_Something(self):
        ...  # Test something

    def test_AnotherThing(self):
        ...  # Test another thing

    def test_DifferentStuff(self):
        ...  # Test different stuff
I'm running the tests from Python with the following lines:
tests_to_run = ['mymodule.foo:test_AnotherThing', 'mymodule.foo:test_DifferentStuff']
result = nose.run(defaultTest=tests_to_run)
(This is obviously a bit more complicated and there's some logic to pick what tests I want to run)
Nose will run just the selected tests, as expected, but the setUpClass will run once for every test in tests_to_run. Is there any way to avoid this?
What I'm trying to achieve is to be able to run some dynamic set of tests while using nose in a Python script (not from the command line)

As @jonrsharpe mentioned, setUpModule is what I was after: it runs just once for the whole module where my tests reside.
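For reference, a minimal sketch of what that looks like in mymodule/foo.py (the setup and test bodies are placeholders):
# mymodule/foo.py
from unittest import TestCase

def setUpModule():
    # Runs exactly once for this module, however many of its
    # tests are cherry-picked into the nose run.
    ...  # do the expensive setup here

class SomeTestClass(TestCase):
    def test_AnotherThing(self):
        ...  # test another thing

    def test_DifferentStuff(self):
        ...  # test different stuff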

Related

How to run a test case marked `@unittest.skip` in Python?

Assuming the following test suite:
# test_module.py
import unittest

class Tests(unittest.TestCase):
    @unittest.skip("skipped by default")
    def test_1(self):
        print("This should run only if explicitly asked to but not by default")
    # assume many other test cases and methods with or without the skip marker
When invoking the unittest library via python -m unittest, are there any arguments I can pass to it to actually run, and not skip, Tests.test_1, without modifying the test code and without running any other skipped tests?
python -m unittest test_module.Tests.test_1 correctly selects this as the only test to run, but it still skips it.
If there is no way to do it without modifying the test code, what is the most idiomatic change I can make to conditionally undo the @unittest.skip and run one specific test case?
In all cases, I still want python -m unittest discover (or any other invocation that doesn't explicitly turn on the test) to skip the test.
If you want to skip some expensive tests, you can use a conditional skip together with a custom environment variable (inside your TestCase, with os imported):
@unittest.skipIf(int(os.getenv('TEST_LEVEL', 0)) < 1, 'expensive test')
def test_expensive(self):
    ...
Then you can include this test by specifying the corresponding environment variable:
TEST_LEVEL=1 python -m unittest discover
TEST_LEVEL=1 python -m unittest test_module.Tests.test_1
If you want to skip a test because you expect it to fail, you can use the dedicated expectedFailure decorator.
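For completeness, a minimal sketch of expectedFailure (the failing assertion is illustrative):
import unittest

class Tests(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # Reported as an expected failure rather than an error;
        # if it suddenly passes, the run flags it as an unexpected success.
        self.assertEqual(1, 2)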
By the way, with pytest you can define a dedicated marker (e.g. @pytest.mark.slow) for slow tests.

Patching the same module for every test

If I have multiple tests that patch the same module, is there a way to not have to patch it in every test (namely, factor it out)?
def test_1(mocker):
    mocker.patch.object(module, 'method')
    # run test

def test_2(mocker):
    mocker.patch.object(module, 'method')
    # run test

def test_3(mocker):
    mocker.patch.object(module, 'method')
    # run test
Yes, there is: take a look at autouse fixtures in the official pytest docs.
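For example, with pytest-mock the repeated patch can be hoisted into a single autouse fixture; a sketch, where module stands in for the real module imported in your test file:
import pytest

@pytest.fixture(autouse=True)
def patch_method(mocker):
    # Applied automatically to every test collected in this file,
    # so individual tests no longer patch module.method themselves.
    return mocker.patch.object(module, 'method')

def test_1():
    ...  # run test; module.method is already patched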

Python unittest, running only the tests that failed

I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
1. Add a test.
2. Run the tests (one fails because I haven't written the code to make it pass).
3. Implement the behaviour.
4. Run only the test that failed last time.
5. Fix the silly error I made when implementing the code.
6. Run only the failing test, which passes this time.
7. Run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only that test will be run. For example, if you only want to run tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored. One thing you can do is a quick find-and-replace in vim: replace all occurrences of "test_" with "_test_", then find the one test that failed and remove its leading underscore.
Just run the tests with the --last-failed option (you'll need pytest).
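For example, from the project root:
pytest --last-failed
pytest --failed-first
The first re-runs only the tests that failed on the previous run; the second runs those failures first and then everything else.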

Can tests with pytest fixtures be run interactively?

I have some tests written using pytest and fixtures, e.g.:
import os
import shutil
import tempfile

import pytest

class TestThing:
    @pytest.fixture()
    def temp_dir(self, request):
        my_temp_dir = tempfile.mkdtemp()
        def fin():
            shutil.rmtree(my_temp_dir)
        request.addfinalizer(fin)
        return my_temp_dir

    def test_something(self, temp_dir):
        with open(os.path.join(temp_dir, 'test.txt'), 'w') as f:
            f.write('test')
This works fine when the tests are invoked from the shell, e.g.
$ py.test
but I don't know how to run them from within a python/ipython session; trying e.g.
tt = TestThing()
tt.test_something(tt.temp_dir())
fails because temp_dir requires a request object to be passed on. So, how does one invoke a fixture with a request object injected?
Yes. You don't have to manually assemble any test fixtures or anything like that. Everything runs just like calling pytest in the project directory.
Method 1:
This is the best method because it gives you access to the debugger if your test fails.
In an IPython shell use:
ipython> run -m pytest prj/
This will run all your tests in the prj/tests directory.
This will give you access to the debugger and lets you set breakpoints if you have
import ipdb; ipdb.set_trace() in your program (https://docs.pytest.org/en/latest/usage.html#setting-breakpoints).
Method 2:
Use !pytest while in the test directory. This won't give you access to the debugger. However, if you use
ipython> !pytest --pdb
and a test fails, it will drop you into the debugger (a subshell), so you can run your post-mortem analysis (https://docs.pytest.org/en/latest/usage.html#dropping-to-pdb-python-debugger-on-failures).
Using these methods you can even run individual modules/test_functions/TestClasses in IPython (https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):
ipython> run -m pytest prj/tests/test_module1.py::TestClass1::test_function1
You can bypass the pytest.fixture decorator and directly call the wrapped test function.
tmp = tt.temp_dir.__pytest_wrapped__.obj(request=...)
Accessing internals is bad, but when you need it...
The best method I have, which is far from ideal, is to just %run the test file, manually assemble the fixtures, and then call the tests directly. The problem with this is tracking down the modules where the default fixtures are defined and calling them in their order of dependencies.
You can use two cells for this. First:
def test_something():
    assert True
Second:
from tempfile import mktemp
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(_i)  # write last cell input (_i is an IPython variable)
!pytest $test_file
You can also do this in one cell, but you won't have code highlighting:
from tempfile import mktemp
test_code = """
def test_something():
    assert True
"""
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(test_code)
!pytest $test_file
The simple answer is that you don't want to run py.test interactively from Python. Most people set up some integration with their text editor or IDE to run py.test and parse its output. But really it's a command line tool, and that is how it should be used.
As a side note, you may want to check out the built-in tmpdir fixture: http://pytest.org/latest/tmpdir.html It seems like you're reinventing it.
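For comparison, a sketch of the same test using the built-in tmpdir fixture, which creates and cleans up a fresh per-test directory for you:
def test_something(tmpdir):
    # tmpdir is a py.path.local pointing at a unique temporary
    # directory; no request/finalizer plumbing is needed.
    tmpdir.join('test.txt').write('test')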

running a test suite (an arbitrary collection of tests) with py.test

I'm using py.test to build a functional test framework, so I need to be able to specify the exact tests to be run. I understand the beauty of dynamic test collection, but I want to be able to run my test environment health checks first, then run my regression tests after; that categorization does not preclude tests in these sets being used for other purposes.
The test suites will be tied to Jenkins build projects. I'm using osx, python 2.7.3, py.test 2.3.4.
So I have a test case like the following:
# sample_unittest.py
import unittest, pytest
class TestClass(unittest.TestCase):
    def setUp(self):
        self.testdata = ['apple', 'pear', 'berry']

    def test_first(self):
        assert 'apple' in self.testdata

    def test_second(self):
        assert 'pear' in self.testdata

    def tearDown(self):
        self.testdata = []

def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestClass('test_first'))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
And I have a test suite like this:
# suite_regression.py
import unittest, pytest
import functionaltests.sample_unittest as sample_unittest
# set up the imported tests
suite_sample_unittest = sample_unittest.suite()
# create this test suite
suite = unittest.TestSuite()
suite.addTest(suite_sample_unittest)
# run the suite
unittest.TextTestRunner(verbosity=2).run(suite)
If I run the following from the command line against the suite, test_first runs (but I don't get the additional information that py.test would provide):
python functionaltests/suite_regression.py -v
If I run the following against the suite, 0 tests are collected:
py.test functionaltests/suite_regression.py
If I run the following against the testcase, test_first and test_second run:
py.test functionaltests/sample_unittest.py -v
I don't see how running py.test with keywords will help organize tests into suites. Placing test cases into a folder structure and running py.test with folder options won't let me organize tests by functional area.
So my questions:
Is there a py.test mechanism for specifying arbitrary groupings of tests in a re-usable format?
Is there a way to use unittest.TestSuite from py.test?
EDIT:
So I tried out py.test markers, which let me flag test functions and test methods with an arbitrary label and then filter for that label at run time.
# conftest.py
import pytest
# set up custom markers
healthcheck = pytest.mark.healthcheck
regression = pytest.mark.regression
And my updated test case:
# sample_unittest.py
import unittest, pytest
class TestClass(unittest.TestCase):
    def setUp(self):
        self.testdata = ['apple', 'pear', 'berry']

    @pytest.mark.healthcheck
    @pytest.mark.regression
    def test_first(self):
        assert 'apple' in self.testdata

    @pytest.mark.regression
    def test_second(self):
        assert 'pear' in self.testdata

    def tearDown(self):
        self.testdata = []

def suite():
    suite = unittest.TestSuite()
    suite.addTest(TestClass('test_first'))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
So running the following command collects and runs test_first:
py.test functionaltests/sample_unittest.py -v -m healthcheck
And this collects and runs test_first and test_second:
py.test functionaltests/sample_unittest.py -v -m regression
So back to my questions: markers are a partial solution, but I still don't have a way to control the execution order of the collected marked tests.
No need to use markers in this case: setting @pytest.mark.incremental on your py.test test class will force the execution order to match the declaration order:
# sequential.py
import pytest
@pytest.mark.incremental
class TestSequential:
    def test_first(self):
        print('first')

    def test_second(self):
        print('second')

    def test_third(self):
        print('third')
Now running it with
pytest -s -v sequential.py
produces the following output:
=========== test session starts ===========
collected 3 items
sequential.py::TestSequential::test_first first
PASSED
sequential.py::TestSequential::test_second second
PASSED
sequential.py::TestSequential::test_third third
PASSED
=========== 3 passed in 0.01 seconds ===========
I guess it's a bit late now but I just finished up an interactive selection plugin with docs here:
https://github.com/tgoodlet/pytest-interactive
I actually use the hook Holger mentioned above.
It allows you to choose a selection of tests just after the collection phase using IPython. Ordering the tests is pretty easy using slices, subscripts, or tab-completion if that's what you're after. Note that it's an interactive tool meant for use during development and not so much for automated regression runs.
For persistent ordering using marks I've used pytest-ordering which is actually quite useful especially if you have baseline prerequisite tests in a long regression suite.
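A sketch of pytest-ordering's marker-based API as I remember it (the order numbers are illustrative):
import pytest

@pytest.mark.run(order=1)
def test_environment_health():
    ...  # baseline prerequisite, runs first

@pytest.mark.run(order=2)
def test_regression_case():
    ...  # runs after the health check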
There is currently no direct way to control the order of test execution. FWIW, there is a plugin hook pytest_collection_modifyitems which you can use to implement something like this. See https://github.com/klrmn/pytest-random/blob/master/random_plugin.py for a plugin that uses it to implement randomization.
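As an illustration, a conftest.py hook that moves tests marked healthcheck to the front of the collected list; this is a sketch, not the linked plugin's code (on older pytest, get_closest_marker is spelled get_marker):
# conftest.py
def pytest_collection_modifyitems(items):
    # Stable sort: 'healthcheck'-marked tests first, the rest after,
    # otherwise preserving collection order.
    items.sort(key=lambda item: 0 if item.get_closest_marker('healthcheck') else 1)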
I know this is old, but this library seems to allow exactly what the OP was looking for; it may help someone in the future.
https://pytest-ordering.readthedocs.io/en/develop/
