I'm using @pytest.mark to uniquely identify specific tests, so I created my own custom marker:
@pytest.mark.key
I'm using it like this:
@pytest.mark.key("test-001")
def test_simple(self):
    self.passing_step()
    self.passing_step()
    self.passing_step()
    self.passing_step()
    assert True
Now, from the console, I would like to run all tests marked with the key "test-001". How can I achieve this?
What I'm looking for is something like this:
pypi.org/project/pytest-jira/0.3.6
where a test can be mapped to a Jira key. I looked at the source code behind that link, but I'm unsure how to achieve this in order to run specific tests. Say I only want to run the test with the key "test-001".
Pytest does not provide this out of the box. You can filter by marker names using the -m option, but not by marker attributes.
You can add your own option to filter by keys, however. Here is an example:
conftest.py
def pytest_configure(config):
    # register your new marker to avoid warnings
    config.addinivalue_line(
        "markers",
        "key: specify a test key"
    )

def pytest_addoption(parser):
    # add your new filter option (you can name it whatever you want)
    parser.addoption('--key', action='store')

def pytest_collection_modifyitems(config, items):
    # check if you got an option like --key=test-001
    filter = config.getoption("--key")
    if filter:
        new_items = []
        for item in items:
            mark = item.get_closest_marker("key")
            if mark and mark.args and mark.args[0] == filter:
                # collect all items that have a key marker with that value
                new_items.append(item)
        items[:] = new_items
Now you run something like
pytest --key=test-001
to only run the tests with that marker attribute.
Note that this will still show the overall number of tests as collected, but run only the filtered ones. Here is an example:
test_key.py
import pytest

@pytest.mark.key("test-001")
def test_simple1():
    pass

@pytest.mark.key("test-002")
def test_simple2():
    pass

@pytest.mark.key("test-001")
def test_simple3():
    pass

def test_simple4():
    pass
$ python -m pytest -v --key=test-001 test_key.py
...
collected 4 items
test_key.py::test_simple1 PASSED
test_key.py::test_simple3 PASSED
================================================== 2 passed in 0.26s ==================================================
You can run pytest with the -m option. Check the command below:
pytest -m 'test-001' <your test file>
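Note that -m matches on marker names rather than marker arguments, so for the key to be selectable this way it would need to be the marker name itself (and a valid Python identifier). A minimal sketch of that variant (the test_001 naming is just an assumption):

import pytest

@pytest.mark.test_001
def test_simple():
    assert True

# run only the tests carrying that marker name:
#   pytest -m test_001 <your test file>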
I'm trying to collect some stats of how many test cases there are per feature or team, and want to do this only during collection, not when everyone in my company runs their tests. I'm looking for a way to run a function only if --collect-only was used in the py.test command.
Goal:
Run some code only if --collect-only was used.
I need access to the test items data structure (i.e. I need all of the markers that were used).
Currently, I am doing this via the pytest_collection_modifyitems hook:
def pytest_collection_modifyitems(config, items):
    # This hook runs after collection
    for item in items:
        teams = [mark.name for mark in item.iter_markers() if mark.name.startswith('team_')]
        features = [mark.name for mark in item.iter_markers() if mark.name.startswith('feature_')]
        # do something with these markers
Is there a way to run this code above if and only if py.test was run with --collect-only? If anyone has suggestions on better ways to do this, please help!
Thanks!
Just check whether the flag is set in the config object. Example with your hookimpl:
def pytest_collection_modifyitems(config, items):
    if config.option.collectonly:
        print("I will run only with --collectonly flag")
    else:
        print("I will run only without --collectonly flag")
I see from here that I can pick out tests based on their mark like so:
pytest -v -m webtest
Let's say I have a test decorated like so:
@pytest.mark.parametrize('platform,configuration', (
    pytest.param('win', 'release'),
    pytest.param('win', 'debug')))
def test_Foo(self, platform, configuration):
    ...
I'd like to do something like the following:
pytest -v -m parameterize.configuration.release
That way I run test_Foo with the parameter 'release' but not the parameter 'debug'. Is there a way to do this? I think I can do this by writing a wrapper test and then passing only the desired parameter down, but I want to avoid doing so because we already have a large number of tests parametrized as described, and I want to avoid writing a large number of wrapper tests.
You can use -k for expression-based filtering:
$ pytest -k win-release
will run only tests containing win-release in their names. You can list all names without executing the tests by issuing
$ pytest --collect-only -q
Should an expression not be enough, you can always extend pytest by adding your own custom filtering logic, for example by passing the parameter name and value via command line args and selecting only tests that are parametrized accordingly:
# conftest.py
def pytest_addoption(parser):
    parser.addoption('--param-name', action='store', help='parameter name')
    parser.addoption('--param-value', action='store', help='parameter value')

def pytest_collection_modifyitems(session, config, items):
    param_name = config.getoption('--param-name')
    param_value = config.getoption('--param-value')
    if param_name and param_value:
        items[:] = [item for item in items
                    if hasattr(item, 'callspec')
                    and param_name in item.callspec.params
                    and item.callspec.params[param_name] == param_value]
Now you can e.g. call
$ pytest --param-name=platform --param-value=win
and only tests parametrized with platform=win will be executed.
An alternative to the official answer by hoefling is to create a special marker with pytest-pilot and to apply it:
conftest.py:
from pytest_pilot import EasyMarker
mymark = EasyMarker('mymark', has_arg=False, mode='hard_filter')
test_so.py:
import pytest
from .conftest import mymark

@pytest.mark.parametrize('platform,configuration', (
    mymark.param('win', 'release'),
    pytest.param('win', 'debug')
))
def test_foo(platform, configuration):
    pass
You can now run pytest --mymark; it correctly runs only the test with the mark:
test_so\test_so.py::test_foo[win-release] PASSED [ 50%]
test_so\test_so.py::test_foo[win-debug] SKIPPED [100%]
Of course it might not be relevant in all cases, as it requires code modification; however, for advanced filtering patterns, or if the filtering is here to stay and you wish to have some CLI shortcuts to perform it, it might be interesting. Note: I'm the author of this lib ;)
I have some unit tests, but I'm looking for a way to tag some specific unit tests to have them skipped unless you declare an option when you call the tests.
Example:
If I call pytest test_reports.py, I'd want a couple specific unit tests to not be run.
But if I call pytest -<something> test_reports, then I want all my tests to be run.
I looked into the @pytest.mark.skipif(condition) tag but couldn't quite figure it out, so I'm not sure if I'm on the right track or not. Any guidance here would be great!
The pytest documentation offers a nice example on how to skip tests marked "slow" by default and only run them with a --runslow option:
# conftest.py
import pytest
def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now mark our tests in the following way:
# test_module.py
from time import sleep
import pytest

def test_func_fast():
    sleep(0.1)

@pytest.mark.slow
def test_func_slow():
    sleep(10)
The test test_func_fast is always executed (calling e.g. pytest). The "slow" function test_func_slow, however, will only be executed when calling pytest --runslow.
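For example, assuming the conftest.py and test_module.py above:

$ pytest test_module.py            # runs test_func_fast, skips test_func_slow
$ pytest --runslow test_module.py  # runs both tests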
We are using markers together with addoption in conftest.py.
Test case:

@pytest.mark.no_cmd
def test_skip_if_no_command_line():
    assert True

conftest.py:
In the function pytest_addoption:

def pytest_addoption(parser):
    parser.addoption("--no_cmd", action="store_true",
                     help="run the tests only in case of that command line (marked with marker @no_cmd)")

In the function pytest_runtest_setup:

def pytest_runtest_setup(item):
    if 'no_cmd' in item.keywords and not item.config.getoption("--no_cmd"):
        pytest.skip("need --no_cmd option to run this test")
pytest call:
py.test test_the_marker
-> test will be skipped
py.test test_the_marker --no_cmd
-> test will run
There are two ways to do that:
The first method is to tag the functions with the @pytest.mark decorator and run/skip the tagged functions alone using the -m option.

@pytest.mark.anytag
def test_calc_add():
    assert True

@pytest.mark.anytag
def test_calc_multiply():
    assert True

def test_calc_divide():
    assert True
Running the script as py.test -m anytag test_script.py will run only the first two functions.
Alternatively, running as py.test -m "not anytag" test_script.py will run only the third function and skip the first two functions.
Here 'anytag' is the name of the tag. It can be anything!
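If you want to avoid the unknown-marker warning (or use --strict-markers), you can additionally register the tag, e.g. in pytest.ini (a sketch; the description text is arbitrary):

[pytest]
markers =
    anytag: tests selected with -m anytag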
The second way is to run the functions with a common substring in their names using the -k option.
def test_calc_add():
    assert True

def test_calc_multiply():
    assert True

def test_divide():
    assert True
Running the script as py.test -k calc test_script.py will run the first two functions and skip the last one.
Note that 'calc' is the common substring present in both function names; any other function having 'calc' in its name, like 'calculate', will also be run.
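The -k option also accepts boolean expressions, so substrings can be combined; for example (based on the functions above):

py.test -k "calc and not multiply" test_script.py   # runs only test_calc_add
py.test -k "calc or divide" test_script.py          # runs all three functions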
Following the approach suggested in the pytest docs, and thus the answer of @Manu_CJ, is certainly the way to go here.
I'd simply like to show how this can be adapted to easily define multiple options:
The canonical example given by the pytest docs highlights well how to add a single marker through command line options. However, adapting it to add multiple markers might not be straightforward, as the three hooks pytest_addoption, pytest_configure and pytest_collection_modifyitems all need to be invoked to allow adding even a single marker through a command line option.
This is one way you can adapt the canonical example, if you have several markers, like 'flag1', 'flag2', etc., that you want to be able to add via command line option:
# content of conftest.py
import pytest

# Create a dict of markers.
# The key is used as option, so --{key} will run all tests marked with key.
# The value must be a dict that specifies:
# 1. 'help': the command line help text
# 2. 'marker-descr': a description of the marker
# 3. 'skip-reason': displayed reason whenever a test with this marker is skipped.
optional_markers = {
    "flag1": {"help": "<Command line help text for flag1...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    "flag2": {"help": "<Command line help text for flag2...>",
              "marker-descr": "<Description of the marker...>",
              "skip-reason": "Test only runs with the --{} option."},
    # add further markers here
}

def pytest_addoption(parser):
    for marker, info in optional_markers.items():
        parser.addoption("--{}".format(marker), action="store_true",
                         default=False, help=info['help'])

def pytest_configure(config):
    for marker, info in optional_markers.items():
        config.addinivalue_line("markers",
                                "{}: {}".format(marker, info['marker-descr']))

def pytest_collection_modifyitems(config, items):
    for marker, info in optional_markers.items():
        if not config.getoption("--{}".format(marker)):
            skip_test = pytest.mark.skip(
                reason=info['skip-reason'].format(marker)
            )
            for item in items:
                if marker in item.keywords:
                    item.add_marker(skip_test)
Now you can use the markers defined in optional_markers in your test modules:
# content of test_module.py
import pytest

@pytest.mark.flag1
def test_some_func():
    pass

@pytest.mark.flag2
def test_other_func():
    pass
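With that conftest.py in place, the flagged tests are skipped unless their option is passed; for example:

pytest                    # test_some_func and test_other_func are both skipped
pytest --flag1            # runs test_some_func, skips test_other_func
pytest --flag1 --flag2    # runs both tests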
If the use-case prohibits modifying either conftest.py and/or pytest.ini, here's how to use environment variables to directly take advantage of the skipif marker.
test_reports.py contents:
import os
import pytest

@pytest.mark.skipif(
    not os.environ.get("MY_SPECIAL_FLAG"),
    reason="MY_SPECIAL_FLAG not set in environment"
)
def test_skip_if_no_cli_tag():
    assert True

def test_always_run():
    assert True
In Windows:
> pytest -v test_reports.py --no-header
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============== 1 passed, 1 skipped in 0.01s ==============
> cmd /c "set MY_SPECIAL_FLAG=1&pytest -v test_reports.py --no-header"
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
=================== 2 passed in 0.01s ====================
In Linux or other *NIX-like systems:
$ pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============ 1 passed, 1 skipped in 0.00s =============
$ MY_SPECIAL_FLAG=1 pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
================== 2 passed in 0.00s ==================
MY_SPECIAL_FLAG can be whatever you wish based on your specific use-case and of course --no-header is just being used for this example.
Enjoy.
I've technically already solved the problem I was working on, but I can't help but feel like my solution is ugly:
I've got a pytest suite that I can run in two modes: Local Mode (for developing tests; everything just runs on my dev box through Chrome), and Seriousface Regression Testing Mode (for CI; the suite gets run on a zillion browsers and OSes). I've got a command-line flag to toggle between the two modes, --test-local. If it's there, I run in local mode. If it's not there, I run in seriousface mode. Here's how I do it:
# contents of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--test-local", action="store_true", default=False, help="run locally instead of in seriousface mode")

def pytest_generate_tests(metafunc):
    if "dummy" in metafunc.fixturenames:
        if metafunc.config.getoption("--test-local"):
            driverParams = [(True, None)]
        else:
            driverParams = [(False, "seriousface setting 1"), (False, "seriousface setting 2")]
        metafunc.parametrize("dummy", driverParams)

@pytest.fixture(scope="function")
def driver(dummy):
    _driver = makeDriverStuff(dummy[0], dummy[1])
    yield _driver
    _driver.cleanup()

@pytest.fixture
def dummy():
    pass
The problem is, that dummy fixture is hideous. I've tried having pytest_generate_tests parametrize the driver fixture directly, but it ends up replacing the fixture rather than just feeding stuff into it, so cleanup() never gets called when the test finishes. Using the dummy lets me replace the dummy with my parameter tuple, so that it gets passed into driver().
But, to reiterate, what I have does work, it just feels like a janky hack.
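For reference, the usual way to feed parameters into a fixture without a dummy is indirect parametrization, where the fixture reads request.param; a sketch based on the code above (makeDriverStuff is the helper from the question):

# contents of conftest.py (sketch using indirect parametrization)
import pytest

def pytest_addoption(parser):
    parser.addoption("--test-local", action="store_true", default=False,
                     help="run locally instead of in seriousface mode")

def pytest_generate_tests(metafunc):
    if "driver" in metafunc.fixturenames:
        if metafunc.config.getoption("--test-local"):
            driverParams = [(True, None)]
        else:
            driverParams = [(False, "seriousface setting 1"), (False, "seriousface setting 2")]
        # indirect=True routes each tuple to the driver fixture via request.param
        metafunc.parametrize("driver", driverParams, indirect=True)

@pytest.fixture(scope="function")
def driver(request):
    _driver = makeDriverStuff(request.param[0], request.param[1])
    yield _driver
    _driver.cleanup()  # teardown still runs; the fixture itself is not replaced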
You can try a different approach: instead of dynamically selecting the parameter set for the test, declare ALL parameter sets on it, but deselect the irrelevant ones at launch time.
# r.py
import pytest

real = pytest.mark.real
mock = pytest.mark.mock

@pytest.mark.parametrize('a, b', [
    real((True, 'serious 1')),
    real((True, 'serious 2')),
    mock((False, 'fancy mock')),
    (None, 'always unmarked!'),
])
def test_me(a, b):
    print([a, b])
Then run it as follows:
pytest -ra -v -s r.py -m real # strictly marked sets
pytest -ra -v -s r.py -m mock # strictly marked sets
pytest -ra -v -s r.py -m 'not real' # incl. non-marked sets
pytest -ra -v -s r.py -m 'not mock' # incl. non-marked sets
Also, skipif marks can be used for the selected parameter set (different from deselecting). They are natively supported by pytest, but the syntax is quite ugly. See more in pytest's Skip/xfail with parametrize section.
The official manual also contains exactly the same case as in your question in Custom marker and command line option to control test runs. However, it is also not as elegant as -m test deselection, and is more suitable for complex run-time conditions than for the a priori known structure of the tests.
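For reference, the skipif-on-a-parameter-set syntax mentioned above looks roughly like this (a sketch; the condition is just an illustrative assumption):

import sys
import pytest

@pytest.mark.parametrize('a, b', [
    (True, 'serious 1'),
    # skip only this particular parameter set under a condition
    pytest.param(False, 'fancy mock',
                 marks=pytest.mark.skipif(sys.platform == 'win32',
                                          reason='not run on Windows')),
])
def test_me(a, b):
    print([a, b])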
Given that this is my test code:
# conftest.py
import pytest

@pytest.fixture(scope='function')
def fixA(request):
    pass

@pytest.fixture(scope='function')
def fixB(request):
    pass

# test_my.py
import pytest

pytestmark = pytest.mark.usefixtures("fixA")

def test_something():
    pass
I want to be able to use fixB() instead of fixA(). Using fixB() is easy enough; I could add a pytest.ini like this:
[pytest]
usefixtures=fixB
but I couldn't figure out how to disable fixA, either from the command line or from the configuration.
Is my use case so far-fetched?
(The actual reason: I want to keep fixA working in our CI system, but for day-to-day work, I need people to be able to disable it on their desks.)
I've found a way (after fighting it in the debugger; the py.test documentation isn't that clear regarding how fixtures are selected).
I added this to conftest.py:
# conftest.py
def pytest_runtest_setup(item):
    for fixture in item.config.getini('ignorefixtures'):
        del item._fixtureinfo.name2fixturedefs[fixture]
        item._fixtureinfo.names_closure.remove(fixture)

def pytest_addoption(parser):
    parser.addini('ignorefixtures', 'fixtures to ignore', 'linelist')
and then I could put this in pytest.ini:
[pytest]
usefixtures=fixB
ignorefixtures=fixA
It would be nice to be able to do those kinds of things from the command line as well...
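A sketch of how the same hack could be driven from the command line instead of pytest.ini (--ignore-fixture is an assumed option name; it relies on the same private _fixtureinfo attributes as above):

# conftest.py
def pytest_addoption(parser):
    parser.addoption('--ignore-fixture', action='append', default=[],
                     help='name of a fixture to ignore (can be given multiple times)')

def pytest_runtest_setup(item):
    for fixture in item.config.getoption('--ignore-fixture'):
        if fixture in item._fixtureinfo.name2fixturedefs:
            del item._fixtureinfo.name2fixturedefs[fixture]
            item._fixtureinfo.names_closure.remove(fixture)

# usage: pytest --ignore-fixture=fixA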