Exclude some tests by default - python

I wish to configure pytest such that it excludes some tests by default, but it should be easy to include them again with a command-line option. I only found -k; I have the impression that it allows complex specifications, but I am not sure how to apply it to my specific need.
The exclusion should be part of the source or a config file (it's permanent; think of very long-running tests which should only be included as a conscious choice, and certainly never in a build pipeline).
Bonus question: if that is not possible, how would I use -k to exclude specific tests? Again, I saw hints in the documentation about using not as a keyword, but that doesn't seem to work for me: -k "not longrunning" gives an error about not being able to find a file "notrunning", but does not exclude anything.

Goal: by default, skip tests marked with @pytest.mark.integration.

conftest.py:

    import pytest

    # executed right after the test items have been collected, but before the test run
    def pytest_collection_modifyitems(config, items):
        if not config.getoption('-m'):
            skip_me = pytest.mark.skip(reason="use `-m integration` to run this test")
            for item in items:
                if "integration" in item.keywords:
                    item.add_marker(skip_me)

pytest.ini:

    [pytest]
    markers =
        integration: integration test that requires environment

Now all tests marked with @pytest.mark.integration are skipped unless you run

    pytest -m integration
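For completeness, here is what a marked test file could look like (the test names and bodies are invented for illustration); with the conftest.py hook above, the marked test is skipped by default and runs only under pytest -m integration:

```python
import pytest

# Hypothetical slow test: skipped by default by the conftest.py hook,
# because it carries the "integration" marker.
@pytest.mark.integration
def test_database_roundtrip():
    assert 1 + 1 == 2

# Unmarked tests are unaffected and always run.
def test_fast_unit():
    assert True
```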

You can mark some tests with pytest and use the -m arg to skip or include them.
For example, consider the following tests:

    import pytest

    def test_a():
        assert True

    @pytest.mark.never_run
    def test_b():
        assert True

    def test_c():
        assert True

    @pytest.mark.never_run
    def test_d():
        assert True

You can run pytest like this to run all the tests:

    pytest

To skip the marked tests, run:

    pytest -m "not never_run"

To run only the marked tests:

    pytest -m "never_run"

What I have done in the past is create custom markers for these tests, so that I can exclude them with the -m command-line flag. For example, place the following content in your pytest.ini file:

    [pytest]
    markers =
        longrunning: marks a test as longrunning

Then we just have to mark our long-running tests with this marker:

    import time

    import pytest

    @pytest.mark.longrunning
    def test_my_long_test():
        time.sleep(100000)

Then when we run the tests we would do pytest -m "not longrunning" tests/ to exclude them, and pytest tests to run everything as intended.

Related

Select suite of tests for coverage with pytest

I'm running a set of tests with pytest and also want to measure their coverage with the coverage package.
I have some decorators in my code dividing the tests into several suites, using something like:

    @pytest.mark.firstgroup
    def test_myfirsttest():
        assert True is True

    @pytest.mark.secondgroup
    def test_mysecondtest():
        assert False is False

I need something like:

    coverage run --suite=firstgroup -m pytest

so that in this case only the first test, since it's in the correct suite, is used to measure coverage.
The suites work correctly when I use pytest directly, but I don't know how to use them with coverage.
Is there any way to configure coverage to achieve this?
You can pass whatever arguments you want to pytest. If this line works:
pytest --something --also=another 1 2 foo bar
then you can run it under coverage like this:
coverage run -m pytest --something --also=another 1 2 foo bar

Pytest select tests based on mark.parameterize value?

I see from here that I can select tests based on their mark like so:

    pytest -v -m webtest

Let's say I have a test decorated like so:

    @pytest.mark.parametrize('platform,configuration', (
        pytest.param('win', 'release'),
        pytest.param('win', 'debug')))
    def test_Foo(self):
        ...

I'd like to do something like the following:

    pytest -v -m parameterize.configuration.release

That way I would run test_Foo with the 'release' parameter but not the 'debug' parameter. Is there a way to do this? I think I could do it by writing a wrapper test and passing only the desired parameter down, but I want to avoid that: we already have a large number of tests parametrized as described, and I don't want to write a large number of wrapper tests.
You can use -k for expression-based filtering:

    $ pytest -k win-release

will run only tests containing win-release in their names. You can list all names without executing the tests by issuing:

    $ pytest --collect-only -q

Should an expression not be enough, you can always extend pytest with custom filtering logic, for example by passing a parameter name and value via command-line args and selecting only the tests that are parametrized accordingly:

    # conftest.py

    def pytest_addoption(parser):
        parser.addoption('--param-name', action='store', help='parameter name')
        parser.addoption('--param-value', action='store', help='parameter value')

    def pytest_collection_modifyitems(session, config, items):
        param_name = config.getoption('--param-name')
        param_value = config.getoption('--param-value')
        if param_name and param_value:
            items[:] = [item for item in items
                        if hasattr(item, 'callspec')
                        and param_name in item.callspec.params
                        and item.callspec.params[param_name] == param_value]

Now you can e.g. call:

    $ pytest --param-name=platform --param-value=win

and only tests parametrized with platform=win will be executed.
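As a concrete illustration (the test and its parameter values are invented for the example), given the file below, pytest --param-name=platform --param-value=win would keep only the first two parametrized variants:

```python
import pytest

@pytest.mark.parametrize('platform,configuration', [
    ('win', 'release'),
    ('win', 'debug'),
    ('linux', 'release'),  # deselected by --param-value=win
])
def test_build(platform, configuration):
    # placeholder body; a real test would exercise the build here
    assert platform in ('win', 'linux')
```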
An alternative to the official answer by hoefling is to create a special marker with pytest-pilot and apply it:

conftest.py:

    from pytest_pilot import EasyMarker

    mymark = EasyMarker('mymark', has_arg=False, mode='hard_filter')

test_so.py:

    import pytest
    from .conftest import mymark

    @pytest.mark.parametrize('platform,configuration', (
        mymark.param('win', 'release'),
        pytest.param('win', 'debug')
    ))
    def test_foo(platform, configuration):
        pass

You can now run pytest --mymark; it correctly runs only the test with the mark:

    test_so\test_so.py::test_foo[win-release] PASSED  [ 50%]
    test_so\test_so.py::test_foo[win-debug] SKIPPED  [100%]

Of course it might not be relevant in all cases, as it requires code modification; however, for advanced filtering patterns, or if the filtering is here to stay and you wish to have some CLI shortcuts for it, it might be interesting. Note: I'm the author of this lib ;)

Cleaner way to do pytest fixture parameterization based on command-line switch?

I've technically already solved the problem I was working on, but I can't help but feel like my solution is ugly:
I've got a pytest suite that I can run in two modes: Local Mode (for developing tests; everything just runs on my dev box through Chrome), and Seriousface Regression Testing Mode (for CI; the suite gets run on a zillion browsers and OSes). I've got a command-line flag to toggle between the two modes, --test-local. If it's there, I run in local mode. If it's not there, I run in seriousface mode. Here's how I do it:
    # contents of conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--test-local", action="store_true", default=False,
                         help="run locally instead of in seriousface mode")

    def pytest_generate_tests(metafunc):
        if "dummy" in metafunc.fixturenames:
            if metafunc.config.getoption("--test-local"):
                driverParams = [(True, None)]
            else:
                driverParams = [(False, "seriousface setting 1"), (False, "seriousface setting 2")]
            metafunc.parametrize("dummy", driverParams)

    @pytest.fixture(scope="function")
    def driver(dummy):
        _driver = makeDriverStuff(dummy[0], dummy[1])
        yield _driver
        _driver.cleanup()

    @pytest.fixture
    def dummy():
        pass

The problem is that the dummy fixture is hideous. I've tried having pytest_generate_tests parametrize the driver fixture directly, but it ends up replacing the fixture rather than just feeding values into it, so cleanup() never gets called when the test finishes. Using the dummy lets me replace it with my parameter tuple, which then gets passed into driver().
But, to reiterate, what I have does work, it just feels like a janky hack.
You can try a different approach: instead of dynamically selecting the parameter set for the test, declare ALL parameter sets on it, but deselect the irrelevant ones at launch time (the mark-on-parameter syntax below uses pytest.param with marks=, which is the form current pytest versions accept):

    # r.py
    import pytest

    @pytest.mark.parametrize('a, b', [
        pytest.param(True, 'serious 1', marks=pytest.mark.real),
        pytest.param(True, 'serious 2', marks=pytest.mark.real),
        pytest.param(False, 'fancy mock', marks=pytest.mark.mock),
        pytest.param(None, 'always unmarked!'),
    ])
    def test_me(a, b):
        print([a, b])

Then run it as follows:

    pytest -ra -v -s r.py -m real        # strictly marked sets
    pytest -ra -v -s r.py -m mock        # strictly marked sets
    pytest -ra -v -s r.py -m 'not real'  # incl. non-marked sets
    pytest -ra -v -s r.py -m 'not mock'  # incl. non-marked sets

Also, skipif marks can be used on a selected parameter set (different from deselecting); they are natively supported by pytest, but the syntax is quite ugly. See more in pytest's Skip/xfail with parametrize section.
The official manual also contains exactly the same case as in your question, in Custom marker and command line option to control test runs. However, it is not as elegant as -m deselection, and is more suitable for complex run-time conditions than for an a priori known structure of the tests.
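For reference, the skipif-on-a-parameter-set syntax mentioned above looks roughly like this (the condition and names are illustrative):

```python
import sys

import pytest

@pytest.mark.parametrize('a, b', [
    (1, 2),
    # this set is still collected, but skipped when the condition holds
    pytest.param(3, 4, marks=pytest.mark.skipif(
        sys.platform == 'win32', reason='not supported on Windows')),
])
def test_sum_positive(a, b):
    assert a + b > 0
```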

In pytest, how to skip all tests that use specified fixture?

Suppose I have a fixture fixture1 and I want to be able to skip, from the pytest command line, all tests that use this fixture.
I found an answer to this question, but decided to do it a different way.
I mark all tests that use fixture1 with a using_fixture1 marker, by defining the following in my conftest.py:

    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            # not every collected item has fixturenames (e.g. doctest items)
            if 'fixture1' in getattr(item, 'fixturenames', ()):
                item.add_marker(pytest.mark.using_fixture1)

Then, when running pytest, I use: py.test tests -m "not using_fixture1".
So the real question should be: are there any better ways to do it, and are there any drawbacks to my solution?
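One drawback worth noting: recent pytest versions warn about unknown markers (and fail under --strict-markers), so the dynamically applied marker should also be registered, for example:

```ini
[pytest]
markers =
    using_fixture1: automatically applied to tests that use fixture1
```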

Running single test function in Nose NOT associated with unittest subclass

nose discovers tests beginning with test_, as well as subclasses of unittest.TestCase.
If one wishes to run a single TestCase test, e.g.:

    # file tests.py
    class T(unittest.TestCase):
        def test_something(self):
            1/0

this can be done on the command line with:

    nosetests tests:T.test_something

I sometimes prefer to write a simple function and skip all the unittest boilerplate:

    def test_something_else():
        assert False

In that case, the test will still be run by nose when it runs all my tests. But how can I tell nose to run only that test, using the (Unix) command line?

That would be:

    nosetests tests:test_something_else

An additional tip is to use attributes:

    from nose.plugins.attrib import attr

    @attr('now')
    def test_something_else():
        pass

To run all tests tagged with the attribute, execute:

    nosetests -a now

Inversely, to avoid running those tests (the ! needs quoting in most shells):

    nosetests -a '!now'
