pytest parametrization execution order for several tests - python

I'm trying pytest parametrization with pytest_generate_tests():
conftest.py
def pytest_generate_tests(metafunc):
    if 'cliautoconfigargs' in metafunc.fixturenames:
        metafunc.parametrize(
            'cliautoconfigargs', list(<some list of params>))
test_cliautoconfig.py
def test_check_conf_mode(cliautoconfigargs):
    assert True

def test_enable_disable_command(cliautoconfigargs):
    assert True
With this configuration, each test runs through all of its parameters, and only after it completes does the next test start with its own parameters. I'd like to configure testing so that all tests run cyclically: every test with the first parameter, then every test with the second parameter, and so on.
For example, I currently have the following output:
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs0]
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs1]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs0]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs1]
I want to have this instead:
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs0]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs0]
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs1]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs1]

Sorry for duplicating the issue.
I found the answer in "maintaining order of test execution when parametrizing tests in test class":
conftest.py
def pytest_generate_tests(metafunc):
    if 'cliautoconfigargs' in metafunc.fixturenames:
        metafunc.parametrize(
            'cliautoconfigargs', list(<some list of params>), scope="class"
        )
test_cliautoconfig.py
class TestCommand:
    def test_check_conf_mode(self, cliautoconfigargs):
        assert True

    def test_enable_disable_command(self, cliautoconfigargs):
        assert True
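This works because pytest reorders tests to minimize setup and teardown of higher-scoped parametrized fixtures: with scope="class", all tests in the class run with the first parameter before moving on to the second, giving the desired order (node ids shown for two parameters):
test_cliautoconfig.py::TestCommand::test_check_conf_mode[cliautoconfigargs0]
test_cliautoconfig.py::TestCommand::test_enable_disable_command[cliautoconfigargs0]
test_cliautoconfig.py::TestCommand::test_check_conf_mode[cliautoconfigargs1]
test_cliautoconfig.py::TestCommand::test_enable_disable_command[cliautoconfigargs1]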


How do I run a fixture only when the test fails?

I have the following example:
conftest.py:
@pytest.fixture
def my_fixture_1(main_device):
    yield
    if FAILED:
        -- code lines --
    else:
        pass
main.py:
def my_test(my_fixture_1):
    main_device = ...
    -- code lines --
    assert 0
    -- code lines --
    assert 1
When assert 0 fails, for example, the test should fail and the teardown code in my_fixture_1 should execute; if the test passes, that code must not execute. I tried using hookimpl but didn't find a solution: the fixture code always runs, even if the test passes.
Note that main_device is the connected device my test runs on.
You could use request as an argument to your fixture. From that, you can check whether the corresponding test run has failed or not, and in case it failed, execute the code you want to run on failure. In code, that reads as:
@pytest.fixture
def my_fixture_1(request):
    yield
    if request.session.testsfailed:
        print("Only print if failed")
Of course, the fixture will always run but the branch will only be executed if the corresponding test failed.
Note that in Simon Hawe's answer, request.session.testsfailed denotes the number of test failures in the whole session so far, so the branch also fires if an earlier test failed.
Here is an alternative solution that I can think of.
import os
import pytest

@pytest.fixture(scope="module")
def main_device():
    return None

@pytest.fixture(scope='function', autouse=True)
def my_fixture_1(main_device):
    yield
    if os.environ["test_result"] == "failed":
        print("+++++++++ Test Failed ++++++++")
    elif os.environ["test_result"] == "passed":
        print("+++++++++ Test Passed ++++++++")
    elif os.environ["test_result"] == "skipped":
        print("+++++++++ Test Skipped ++++++++")

def pytest_runtest_logreport(report):
    if report.when == 'call':
        os.environ["test_result"] = report.outcome
You can do your implementation directly in the pytest_runtest_logreport hook itself, but the drawback is that you won't get access to the fixtures there, only to the report. So, if you need main_device, you have to go with a custom fixture as shown above.
Using @pytest.fixture(scope='function', autouse=True) runs the fixture automatically for every test case, so you don't have to request it in every test function as an argument.
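For a per-test check (rather than the session-wide counter), the pytest documentation describes a pattern that stores each test's report on the test item via a hookwrapper; the fixture can then inspect the result of exactly the test that used it. A minimal sketch:
# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # attach the report of each phase (setup/call/teardown) to the test item
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def my_fixture_1(request):
    yield
    # request.node is the running test item; rep_call was attached above
    if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
        print("Runs only when this particular test failed")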

All my test functions are loading a fixture that is in the conftest.py, even when they don't need it

I have 2 different test files and some fixtures in my conftest.py:
1)"Test_dummy.py" which contains this function:
def test_nothing():
return 1
2)"Test_file.py". which contains this function:
def test_run(excelvalidation_io):
    dfInput, expectedOutput = excelvalidation_io
    output = run(dfInput)
    for key, df in expectedOutput.items():
        expected = df.fillna(0)
        real = output[key].fillna(0)
        assert expected.equals(real)
3)"conftest.py" which contains these fixtures:
def pytest_generate_tests(metafunc):
inputfiles=glob.glob(DATADIR+"**_input.csv", recursive=False)
iofiles=[(ifile, getoutput(ifile)) for ifile in
inputfiles]
metafunc.parametrize("csvio", iofiles)
#pytest.fixture
def excelvalidation_io(csvio):
dfInput, expectedOutput= csvio
return(dfInput, expectedOutput)
#pytest.fixture
def client():
client = app.test_client()
return client
When I run the tests, "Test_dummy.py" also triggers the parametrization and it generates this error:
In test_nothing: function uses no argument 'csvio'
I have tried placing the fixture inside "Test_file.py" alone and the problem is solved, but I read that it's good practice to keep all fixtures in the conftest file.
The function pytest_generate_tests is a hook that is called for every collected test, so you need to check whether the test actually uses the fixture and do nothing otherwise. Since metafunc.fixturenames contains the test's entire fixture closure, checking for either "excelvalidation_io" or "csvio" works:
def pytest_generate_tests(metafunc):
    if "excelvalidation_io" in metafunc.fixturenames:
        inputfiles = glob.glob(DATADIR + "**_input.csv", recursive=False)
        iofiles = [(ifile, getoutput(ifile)) for ifile in inputfiles]
        metafunc.parametrize("csvio", iofiles)

How can I use a pytest fixture as a parametrize argument?

I have a pytest test like so:
email_two = generate_email_two()
#pytest.mark.parametrize('email', ['email_one#example.com', email_two])
def test_email_thing(self, email):
... # Run some test with the email parameter
Now, as part of refactoring, I have moved the line:
email_two = generate_email_two()
into its own fixture (in a conftest.py file), as it is used in various other places. However, is there some way, other than importing the fixture function directly, of referencing it in this test? I know that funcargs are normally the way of calling a fixture, but in this context I am not inside a test function when I want to call the fixture.
I ended up doing this by having a for loop in the test function, looping over each of the emails. Not the best way, in terms of test output, but it does the job.
import pytest
import fireblog.login as login

pytestmark = pytest.mark.usefixtures("test_with_one_theme")

class Test_groupfinder:
    def test_success(self, persona_test_admin_login, pyramid_config):
        emails = ['id5489746@mockmyid.com', persona_test_admin_login['email']]
        for email in emails:
            res = login.groupfinder(email)
            assert res == ['g:admin']

    def test_failure(self, pyramid_config):
        fake_email = 'some_fake_address@example.com'
        res = login.groupfinder(fake_email)
        assert res == ['g:commenter']
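Another option, not used above, is to parametrize on fixture names and resolve them while the test is running via request.getfixturevalue(). A minimal sketch, assuming email_one and email_two are fixtures defined in conftest.py:
import pytest

@pytest.mark.parametrize('email_fixture', ['email_one', 'email_two'])
def test_email_thing(request, email_fixture):
    # look up the fixture by name at test time
    email = request.getfixturevalue(email_fixture)
    assert email  # run the real checks with the email value here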

pytest: how to ignore metafunc parametrize values for a few tests in a class

I have a test written in Python using pytest, with a conftest.py for setup that parametrizes 5 account ids for each test.
The test class has 5 tests in total, of which 4 need to be parametrized using metafunc (which I have done in conftest.py); the remaining test need not be parametrized. Please tell me how to run all these tests in one go while avoiding parametrization for the last test (test5).
My conftest.py has the following:
def pytest_generate_tests(metafunc):
    accid = ['string1', 'string2', 'string3', 'string4', 'string5']
    metafunc.parametrize("accid", accid)
My test file is named test_accid.py and has the following:
class TestAccount:
    def test_base1(self, accid):
        <test code>

    def test_base2(self, accid):
        <test code>

    def test_base3(self, accid):
        <test code>

    def test_base4(self, accid):
        <test code>

    # The following test should not have accid
    def test_no_accid(self):
        <test code>
Resolved this by adding the following check in conftest.py:
def pytest_generate_tests(metafunc):
    accid = ['string1', 'string2', 'string3', 'string4', 'string5']
    if 'accid' in metafunc.fixturenames:
        metafunc.parametrize("accid", accid)
This parametrizes a test only if accid appears among its arguments. For my last test I removed the accid parameter, and it works now.
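Collection then looks like this (node ids abridged and illustrative):
test_accid.py::TestAccount::test_base1[string1]
...
test_accid.py::TestAccount::test_base4[string5]
test_accid.py::TestAccount::test_no_accid
The four parametrized tests each appear once per account id, while test_no_accid is collected exactly once, unparametrized.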

pytest.yield_fixture using cmd line options

Is it possible to use pytest_addoption(parser) to create a list that is used by a pytest.yield_fixture? I.e.:
def pytest_addoption(parser):
    parser.addoption("--foo", action="store", default="1,2,3")

@pytest.yield_fixture(params=request.config.getoption('--foo').split(','))
def test_bar(request):
    do_something(request.param)
Say you had 6 browsers and you wanted the ability to run the tests against 1 browser as a quick check. I can't figure out how to get the option values in place before test discovery/generation. Help!
This obviously doesn't work, since the request variable does not exist at module scope, which is where the expression in the decorator is evaluated. The way to solve this is to use the pytest_generate_tests hook:
# conftest.py
def pytest_addoption(parser):
    parser.addoption('--foo', action='store', default='1,2,3')

def pytest_configure(config):
    config.foo = config.getoption('foo').split(',')

def pytest_generate_tests(metafunc):
    if 'foo' in metafunc.fixturenames:
        metafunc.parametrize('foo', metafunc.config.foo)

# test_bar.py
def test_bar(foo):
    do_something(foo)
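With this in place, a default run parametrizes test_bar with all three values, while a quick check against a single value (say the first browser) can be selected on the command line:
pytest --foo=1
Each entry in the comma-separated option value becomes one parametrized run of test_bar.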
