I have different suites of tests.
There are normal tests, there are slow tests, and there are backend tests (and arbitrarily more "special" tests that may need to be skipped unless specified).
I would like to always skip slow and backend tests, unless --run-slow and/or --run-backend CLI options are passed.
If only --run-slow is passed, I will run all normal tests plus "slow" tests (e.g. those marked @pytest.mark.slow). The same goes for --run-backend; and if both CLI options are passed, then run all normal tests + slow tests + backend tests.
I followed the "Control skipping of tests according to command line option" pattern from the pytest docs, but I don't yet know pytest well enough to extend it to multiple skip CLI options.
Can anyone help me out?
Reference code snippet:
import pytest

def pytest_addoption(parser):
    parser.addoption("--run-slow", action="store_true", default=False, help="run slow tests")
    parser.addoption("--run-backend", action="store_true", default=False, help="run backend tests")

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test to run slow")
    config.addinivalue_line("markers", "backend: mark test as backend")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-slow") or config.getoption("--run-backend"):  # <-- this line's logic is bad!
        return
    skip = pytest.mark.skip(reason="need '--run-*' option to run")
    for item in items:
        if "slow" in item.keywords or "backend" in item.keywords:
            item.add_marker(skip)
Thank you!
If I understand the question correctly, you are asking for the logic in pytest_collection_modifyitems. You could just change your code to something like:
conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    run_slow = config.getoption("--run-slow")
    run_backend = config.getoption("--run-backend")
    skip = pytest.mark.skip(reason="need '--run-*' option to run")
    for item in items:
        if ((not run_slow and "slow" in item.keywords) or
                (not run_backend and "backend" in item.keywords)):
            item.add_marker(skip)
This would skip all tests marked as slow or backend by default. If you add the command line options --run-slow and/or --run-backend, the respective tests will not be skipped.
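To illustrate, here is a small test module using both markers, and the resulting behaviour (the test names are only examples):
# test_example.py
import pytest

def test_normal():
    assert True

@pytest.mark.slow
def test_slow_path():
    assert True

@pytest.mark.backend
def test_backend_integration():
    assert True
pytest                           # runs test_normal only
pytest --run-slow                # runs test_normal and test_slow_path
pytest --run-slow --run-backend  # runs all three tests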
Another option (as mentioned in the comments) would be filtering by markers. In this case, instead of adding your own command line options, you could just filter tests by markers, e.g. use pytest -m "not (slow or backend)" to not run tests with these markers, and use pytest -m "not slow" and pytest -m "not backend" if you want to skip tests only for one marker. In this case you don't have to implement pytest_collection_modifyitems, but the default behavior would be to run all tests.
Related
I'm trying to use pytest to test certain AWS Lambdas.
In order to perform certain things, I have to pass in some ID.
I've done it in very hacky way, having this in conftest.py:
import sys

def pytest_addoption(parser):
    store = ParamsStore()  # ParamsStore is my own global parameter store class
    try:
        params = sys.argv[1:]
        print(params)
        store.my_id = params[1]
        store.dont_store = params[3]
    except Exception:
        store.existing_stack = ''
    parser.addoption("--myid", action="store", default="", help="Pass RunId To reuse it Later")
    parser.addoption("--dontstore", action="store", default=False, help="Pass True if you'd like to persist when tests are passing")
I understand it's "against" the pytest logic, but I need these parameters to be stored as global params.
Now there are two problems I'm trying to solve.
First, I'd like it not to be as hacky as it is now, where I'm intercepting the params by reading sys.argv directly.
I've read all the "make fixtures" documentation and suggestions, but I need these values here, in conftest.py, before any tests. It's not even "before all" or "per each test": it's practically setting up the whole thing by reading the local AWS cdk.out and manipulating the cloud assembly once before running the tests, so a per-test fixture is not needed.
The only way I see so far is to have two separate scripts: one regular Python script that is run before pytest itself. But that becomes a pain to handle, so I'm trying to bundle it and do everything just by using pytest.
If I can read args, it would work. If I can read some config file, it would work. If I can get pytest to "first execute some Python prescript" and then run the tests, it might work as well.
Second (and I'm not sure whether the first problem can be resolved better), I'd like to just pass a flag --dontstore without any value after it; the mere presence of the flag would imply True, while its absence would imply False.
Thanks,
Option 1:
How about something like this in conftest.py:
def pytest_addoption(parser):
    parser.addoption("--myid", action="store", default="", help="Pass RunId to reuse it later")
    # store_true: the bare flag --dontstore means True, its absence means False
    parser.addoption("--dontstore", action="store_true", default=False,
                     help="pass if you'd like to persist when tests are passing")

def pytest_configure(config):
    my_id = config.getoption("--myid")
    dont_store = config.getoption("--dontstore")
Use these two variables for your setup.
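If the one-time setup (reading cdk.out, preparing the cloud assembly) should also live here, one way is to extend the pytest_configure above to do that work and stash the result on the config object; a rough sketch, where prepare_cloud_assembly is a made-up placeholder for your own setup code and cloud_assembly is just a hypothetical fixture name:
import pytest

def prepare_cloud_assembly(my_id, dont_store):
    # placeholder for your real cdk.out / cloud assembly manipulation
    return {"my_id": my_id, "dont_store": dont_store}

def pytest_configure(config):
    my_id = config.getoption("--myid")
    dont_store = config.getoption("--dontstore")
    # runs exactly once, after option parsing but before collection and any test
    config._cloud_assembly = prepare_cloud_assembly(my_id, dont_store)

@pytest.fixture(scope="session")
def cloud_assembly(request):
    # tests that need the prepared result can pull it back off the config object
    return request.config._cloud_assembly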
Option 2:
Set up a Jenkins job that runs the Python setup script before the pytest run.
I wish to configure pytest such that it excludes some tests by default; but it should be easy to include them again with some command line option. I only found -k, and I have the impression that that allows complex specifications, but am not sure how to go about my specific need...
The exclusion should be part of the source or a config file (it's permanent - think about very long-running tests which should only be included as conscious choice, certainly never in a build pipeline...).
Bonus question: if that is not possible, how would I use -k to exclude specific tests? Again, I saw hints in the documentation about a way to use not as a keyword, but that doesn't seem to work for me. I.e., -k "not longrunning" gives an error about not being able to find a file "notrunning", but does not exclude anything...
Goal: by default, skip tests marked as @pytest.mark.integration.
conftest.py
import pytest

# executed right after test items are collected, but before the test run
def pytest_collection_modifyitems(config, items):
    if not config.getoption('-m'):
        skip_me = pytest.mark.skip(reason="use `-m integration` to run this test")
        for item in items:
            if "integration" in item.keywords:
                item.add_marker(skip_me)
pytest.ini
[pytest]
markers =
    integration: integration test that requires environment
Now all tests marked as @pytest.mark.integration are skipped unless you use
pytest -m integration
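For completeness, a minimal test module this setup applies to (the test names are only examples):
# test_example.py
import pytest

def test_fast_unit():
    # collected and run by a plain `pytest` invocation
    assert True

@pytest.mark.integration
def test_against_real_environment():
    # skipped by plain `pytest`, runs with `pytest -m integration`
    assert True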
You can mark specific tests with pytest markers and use the -m arg to skip or include them.
For example, consider the following tests:
import pytest

def test_a():
    assert True

@pytest.mark.never_run
def test_b():
    assert True

def test_c():
    assert True

@pytest.mark.never_run
def test_d():
    assert True
you can run pytest like this to run all the tests
pytest
To skip the marked tests you can run pytest like this,
pytest -m "not never_run"
If you want to run the marked tests alone,
pytest -m "never_run"
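One caveat: the never_run marker is not registered anywhere in this example, so recent pytest versions will warn about an unknown mark (PytestUnknownMarkWarning). Registering it in pytest.ini avoids that:
[pytest]
markers =
    never_run: tests that should normally be excluded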
What I have done in the past is create custom markers for these tests; that way I can exclude them using the -m command line flag when running the tests. So, for example, place the following content in your pytest.ini file:
[pytest]
markers =
    longrunning: marks a test as longrunning
Then we just have to mark our long running tests with this marker.
@pytest.mark.longrunning
def test_my_long_test():
    time.sleep(100000)
Then, when we run the tests, we would do pytest -m "not longrunning" tests/ to exclude them, and pytest tests/ to run everything as intended.
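Since the question asks for the exclusion to be the default, you can also bake the marker expression into addopts in the same pytest.ini; as far as I know, an explicit -m passed on the command line then takes precedence over the one coming from addopts:
[pytest]
markers =
    longrunning: marks a test as longrunning
addopts = -m "not longrunning"
With this, a plain pytest run excludes the longrunning tests, and something like pytest -m "longrunning or not longrunning" (an expression that matches everything) should bring them back when you consciously want them.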
I see from here that I can pick out tests based on their mark like so:
pytest -v -m webtest
Let's say I have a test decorated like so:
@pytest.mark.parametrize('platform,configuration', (
    pytest.param('win', 'release'),
    pytest.param('win', 'debug')))
def test_Foo(self, platform, configuration):
    ...
I'd like to do something like the following:
pytest -v -m parameterize.configuration.release
That way I run test_Foo with the parameter of 'release' but not the parameter of 'debug'. Is there a way to do this? I think I could do it by writing a wrapper test and passing only the desired parameter down, but I want to avoid that because we already have a large number of tests parametrized as described, and I don't want to write a large number of wrapper tests.
You can use -k for expression-based filtering:
$ pytest -k win-release
will run only tests containing win-release in their names. You can list all names without executing the tests by issuing
$ pytest --collect-only -q
Should an expression not be enough, you can always extend pytest with your own filtering logic, for example by passing a parameter name and value via command line args and selecting only the tests that are parametrized accordingly:
# conftest.py

def pytest_addoption(parser):
    parser.addoption('--param-name', action='store', help='parameter name')
    parser.addoption('--param-value', action='store', help='parameter value')

def pytest_collection_modifyitems(session, config, items):
    param_name = config.getoption('--param-name')
    param_value = config.getoption('--param-value')
    if param_name and param_value:
        items[:] = [item for item in items
                    if hasattr(item, 'callspec')
                    and param_name in item.callspec.params
                    and item.callspec.params[param_name] == param_value]
Now you can e.g. call
$ pytest --param-name=platform --param-value=win
and only tests parametrized with platform=win will be executed.
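For illustration, given a parametrized test like the one from the question (a sketch, the module and test names are only examples):
# test_foo.py
import pytest

@pytest.mark.parametrize('platform,configuration', [
    pytest.param('win', 'release'),
    pytest.param('win', 'debug'),
])
def test_foo(platform, configuration):
    assert platform == 'win'
pytest --param-name=configuration --param-value=release then collects and runs only test_foo[win-release].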
An alternative to the official answer by hoefling is to create a special marker with pytest-pilot and to apply it:
conftest.py:
from pytest_pilot import EasyMarker
mymark = EasyMarker('mymark', has_arg=False, mode='hard_filter')
test_so.py:
import pytest
from .conftest import mymark
@pytest.mark.parametrize('platform,configuration', (
    mymark.param('win', 'release'),
    pytest.param('win', 'debug')
))
def test_foo(platform, configuration):
    pass
You can now run pytest --mymark; it correctly runs only the test with the mark:
test_so\test_so.py::test_foo[win-release] PASSED [ 50%]
test_so\test_so.py::test_foo[win-debug] SKIPPED [100%]
Of course it might not be relevant in all cases, as it requires code modification; however, for advanced filtering patterns, or if the filtering is here to stay and you wish to have some CLI shortcuts to perform it, it might be interesting. Note: I'm the author of this lib ;)
I have some unit tests, but I'm looking for a way to tag some specific unit tests to have them skipped unless you declare an option when you call the tests.
Example:
If I call pytest test_reports.py, I'd want a couple specific unit tests to not be run.
But if I call pytest -<something> test_reports, then I want all my tests to be run.
I looked into the @pytest.mark.skipif(condition) marker but couldn't quite figure it out, so I'm not sure if I'm on the right track or not. Any guidance here would be great!
The pytest documentation offers a nice example on how to skip tests marked "slow" by default and only run them with a --runslow option:
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now mark our tests in the following way:
# test_module.py
from time import sleep

import pytest

def test_func_fast():
    sleep(0.1)

@pytest.mark.slow
def test_func_slow():
    sleep(10)
The test test_func_fast is always executed (calling e.g. pytest). The "slow" function test_func_slow, however, will only be executed when calling pytest --runslow.
We are using markers with addoption in conftest.py
testcase:
@pytest.mark.no_cmd
def test_skip_if_no_command_line():
    assert True
conftest.py:
In function pytest_addoption:
def pytest_addoption(parser):
    parser.addoption("--no_cmd", action="store_true",
                     help="run the tests only in case of that command line (marked with marker no_cmd)")
In function pytest_runtest_setup:
def pytest_runtest_setup(item):
    if 'no_cmd' in item.keywords and not item.config.getoption("--no_cmd"):
        pytest.skip("need --no_cmd option to run this test")
pytest call:
py.test test_the_marker
-> test will be skipped
py.test test_the_marker --no_cmd
-> test will run
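Note that the no_cmd marker is not registered in the snippets above, so newer pytest versions will emit an unknown-mark warning; registering it, e.g. in pytest.ini, avoids that:
[pytest]
markers =
    no_cmd: run only when --no_cmd is given on the command line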
There are two ways to do that:
The first method is to tag the functions with the @pytest.mark decorator and run or skip the tagged functions alone using the -m option.
@pytest.mark.anytag
def test_calc_add():
    assert True

@pytest.mark.anytag
def test_calc_multiply():
    assert True

def test_calc_divide():
    assert True
Running the script as py.test -m anytag test_script.py will run only the first two functions.
Alternatively run as py.test -m "not anytag" test_script.py will run only the third function and skip the first two functions.
Here 'anytag' is the name of the tag; it can be anything!
The second way is to run functions that share a common substring in their names, using the -k option.
def test_calc_add():
    assert True

def test_calc_multiply():
    assert True

def test_divide():
    assert True
Running the script as py.test -k calc test_script.py will run the first two functions and skip the last one.
Note that 'calc' is the common substring in the first two function names; any other function with 'calc' in its name, such as 'calculate', would also be run.
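Also worth noting: -k accepts boolean expressions, not just plain substrings, so you can combine filters, for example:
py.test -k "calc and not multiply" test_script.py   # runs test_calc_add only
py.test -k "calc or divide" test_script.py          # runs all three tests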
Following the approach suggested in the pytest docs, and thus the answer of @Manu_CJ, is certainly the way to go here.
I'd simply like to show how this can be adapted to easily define multiple options:
The canonical example given by the pytest docs shows well how to add a single marker through a command line option. However, adapting it to multiple markers might not be straightforward, as the three hooks pytest_addoption, pytest_configure and pytest_collection_modifyitems all need to be involved to add even a single marker controlled by a command line option.
This is one way you can adapt the canonical example, if you have several markers, like 'flag1', 'flag2', etc., that you want to be able to add via command line option:
# content of conftest.py
import pytest

# Create a dict of markers.
# The key is used as option, so --{key} will run all tests marked with key.
# The value must be a dict that specifies:
# 1. 'help': the command line help text
# 2. 'marker-descr': a description of the marker
# 3. 'skip-reason': displayed reason whenever a test with this marker is skipped.
optional_markers = {
    "flag1": {"help": "<Command line help text for flag1...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    "flag2": {"help": "<Command line help text for flag2...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    # add further markers here
}

def pytest_addoption(parser):
    for marker, info in optional_markers.items():
        parser.addoption("--{}".format(marker), action="store_true",
                         default=False, help=info['help'])

def pytest_configure(config):
    for marker, info in optional_markers.items():
        config.addinivalue_line("markers",
                                "{}: {}".format(marker, info['marker-descr']))

def pytest_collection_modifyitems(config, items):
    for marker, info in optional_markers.items():
        if not config.getoption("--{}".format(marker)):
            skip_test = pytest.mark.skip(
                reason=info['skip-reason'].format(marker)
            )
            for item in items:
                if marker in item.keywords:
                    item.add_marker(skip_test)
Now you can use the markers defined in optional_markers in your test modules:
# content of test_module.py
import pytest

@pytest.mark.flag1
def test_some_func():
    pass

@pytest.mark.flag2
def test_other_func():
    pass
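With this conftest.py and test module in place, the invocations behave analogously to the single-marker example:
pytest                  # runs neither test (both carry optional markers)
pytest --flag1          # runs test_some_func, skips test_other_func
pytest --flag1 --flag2  # runs both tests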
If the use case prohibits modifying conftest.py and/or pytest.ini, here's how to use environment variables to take advantage of the skipif marker directly.
test_reports.py contents:
import os

import pytest

@pytest.mark.skipif(
    not os.environ.get("MY_SPECIAL_FLAG"),
    reason="MY_SPECIAL_FLAG not set in environment"
)
def test_skip_if_no_cli_tag():
    assert True

def test_always_run():
    assert True
In Windows:
> pytest -v test_reports.py --no-header
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============== 1 passed, 1 skipped in 0.01s ==============
> cmd /c "set MY_SPECIAL_FLAG=1&pytest -v test_reports.py --no-header"
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
=================== 2 passed in 0.01s ====================
In Linux or other *NIX-like systems:
$ pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============ 1 passed, 1 skipped in 0.00s =============
$ MY_SPECIAL_FLAG=1 pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
================== 2 passed in 0.00s ==================
MY_SPECIAL_FLAG can be whatever you wish based on your specific use-case and of course --no-header is just being used for this example.
Enjoy.
I've technically already solved the problem I was working on, but I can't help but feel like my solution is ugly:
I've got a pytest suite that I can run in two modes: Local Mode (for developing tests; everything just runs on my dev box through Chrome), and Seriousface Regression Testing Mode (for CI; the suite gets run on a zillion browsers and OSes). I've got a command-line flag to toggle between the two modes, --test-local. If it's there, I run in local mode. If it's not there, I run in seriousface mode. Here's how I do it:
# contents of conftest.py
import pytest
def pytest_addoption(parser):
    parser.addoption("--test-local", action="store_true", default=False, help="run locally instead of in seriousface mode")

def pytest_generate_tests(metafunc):
    if "dummy" in metafunc.fixturenames:
        if metafunc.config.getoption("--test-local"):
            driverParams = [(True, None)]
        else:
            driverParams = [(False, "seriousface setting 1"), (False, "seriousface setting 2")]
        metafunc.parametrize("dummy", driverParams)

@pytest.fixture(scope="function")
def driver(dummy):
    _driver = makeDriverStuff(dummy[0], dummy[1])
    yield _driver
    _driver.cleanup()

@pytest.fixture
def dummy():
    pass
The problem is that the dummy fixture is hideous. I've tried having pytest_generate_tests parametrize the driver fixture directly, but that ends up replacing the fixture rather than just feeding values into it, so cleanup() never gets called when the test finishes. Using dummy lets me replace it with my parameter tuple, which then gets passed into driver().
But, to reiterate, what I have does work, it just feels like a janky hack.
You can try a different approach: instead of dynamically selecting the parameter set for the test, declare ALL parameter sets on it, but deselect the irrelevant ones at launch time.
# r.py
import pytest
real = pytest.mark.real
mock = pytest.mark.mock

@pytest.mark.parametrize('a, b', [
    pytest.param(True, 'serious 1', marks=real),
    pytest.param(True, 'serious 2', marks=real),
    pytest.param(False, 'fancy mock', marks=mock),
    (None, 'always unmarked!'),
])
def test_me(a, b):
    print([a, b])
Then run it as follows:
pytest -ra -v -s r.py -m real # strictly marked sets
pytest -ra -v -s r.py -m mock # strictly marked sets
pytest -ra -v -s r.py -m 'not real' # incl. non-marked sets
pytest -ra -v -s r.py -m 'not mock' # incl. non-marked sets
Also, skipif marks can be applied to selected parameter sets (which is different from deselecting them). They are natively supported by pytest, but the syntax is quite verbose. See more in pytest's Skip/xfail with parametrize section.
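For reference, marking a single parameter set with skipif looks roughly like this (the platform condition is only an illustrative placeholder):
import sys

import pytest

@pytest.mark.parametrize('a, b', [
    (None, 'always run'),
    pytest.param(True, 'serious 1',
                 marks=pytest.mark.skipif(sys.platform == 'win32',
                                          reason='not run on Windows')),
])
def test_me(a, b):
    print([a, b])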
The official manual also contains exactly the same case as in your question, in Custom marker and command line option to control test runs. However, it is not as elegant as -m based deselection, and is more suitable for complex run-time conditions than for the a priori known structure of the tests.