I have multiple conftest.py files in different test directories, each with slight variations. For example:
/test
  /test1
    conftest.py
    test_test1.py
  /test2
    conftest.py
    test_test2.py
Each conftest defines pytest_addoption(parser) and a parameters(request) fixture, with both differences and similarities in the arguments. For example:
test1/conftest.py

def pytest_addoption(parser):
    parser.addoption("--a", required=True)
    parser.addoption("--b")

@pytest.fixture
def parameters(request):
    parameters = {}
    parameters["a"] = request.config.getoption("--a")
    ...
test2/conftest.py

def pytest_addoption(parser):
    parser.addoption("--a")
    parser.addoption("--b")

@pytest.fixture
def parameters(request):
    parameters = {}
    parameters["a"] = request.config.getoption("--a")
    ...
Is there a way to combine these conftest.py files into a single one in the parent directory? I have heard of making a package-scoped fixture, but I am not sure how that would work.
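One possible approach, sketched below under the assumption that --a can be registered as optional and any "required" check pushed into the fixture, is a single conftest.py in the parent /test directory that merges the two (the option and fixture names are just the ones reused from the snippets above):

# /test/conftest.py -- a sketch, not a drop-in replacement
import pytest

def pytest_addoption(parser):
    # Register each option exactly once for the whole test tree
    parser.addoption("--a", default=None)
    parser.addoption("--b", default=None)

@pytest.fixture
def parameters(request):
    # Shared fixture; per-directory validation (e.g. insisting on --a) would go here
    parameters = {}
    parameters["a"] = request.config.getoption("--a")
    parameters["b"] = request.config.getoption("--b")
    return parameters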
I can't wrap my mind around how to test my function, which searches for files.
My current tree:
project
- etls
  - app.py
  - pipeline_1
    - pipeline_1.yaml
  - pipeline_2
    - pipeline_2.yaml
- tests
  - etl
    - unit
      - tests_appfile.py
- conftest.py
In tests_appfile.py I would like to test a function from app.py.
The function is:
from pathlib import Path

def get_yaml_path(job_name: str) -> str:
    list_of_paths = list(Path("etls").rglob(f"{job_name}.yaml"))
    if len(list_of_paths) > 1:
        raise ValueError(
            f"Number of paths > 1 (actual value: {len(list_of_paths)}). Can't decide which pipeline to run"
        )
    elif len(list_of_paths) == 0:
        raise ValueError(f"There are no YAML files for {job_name}")
    else:
        return str(list_of_paths[0])
So, I run app.py with the parameter job_name, and the function has to find the specific YAML for that job.
I want to test it, and the main caveat is that 'etls' is a hardcoded path. My ideas are:
- create a fixture that builds fake folders and YAMLs for the test
- change the working directory to pytest's tmp_path during the tests and create the 'etls' folder there
Which approach is more efficient, considering I will need these YAMLs for other tests, and how do I implement it?
Found the answer: I created this fixture:
@pytest.fixture
def test_appfile_yaml_files(tmp_path):
    directory = tmp_path / "etls/test_pipeline"
    directory.mkdir(parents=True)
    file_path = directory / "test_pipeline.yaml"
    file_path.write_text("this is temp yaml file")
    return file_path
And then monkeypatched the working directory:
def test_get_yaml_path(test_appfile_yaml_files, tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)
    assert get_yaml_path("test_pipeline") == "etls/test_pipeline/test_pipeline.yaml"
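Since the question also mentions reusing these YAMLs in other tests, one option is to move the fixture into the top-level conftest.py and let it change the working directory itself. This is only a sketch; the fixture name etls_workdir and the combined chdir are my own choices, not part of the answer above:

# conftest.py
import pytest

@pytest.fixture
def etls_workdir(tmp_path, monkeypatch):
    # Build a fake etls/ tree inside tmp_path and make it the working directory;
    # monkeypatch restores the original cwd automatically after the test.
    directory = tmp_path / "etls" / "test_pipeline"
    directory.mkdir(parents=True)
    (directory / "test_pipeline.yaml").write_text("this is temp yaml file")
    monkeypatch.chdir(tmp_path)
    return tmp_path

Any test that requests etls_workdir then starts with the fake tree already in place.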
I have 2 different test files and some fixtures in my conftest.py:
1)"Test_dummy.py" which contains this function:
def test_nothing():
return 1
2)"Test_file.py". which contains this function:
def test_run(excelvalidation_io):
dfInput, expectedOutput=excelvalidation_io
output=run(dfInput)
for key, df in expectedOutput.items():
expected=df.fillna(0)
real=output[key].fillna(0)
assert expected.equals(real)
3)"conftest.py" which contains these fixtures:
def pytest_generate_tests(metafunc):
inputfiles=glob.glob(DATADIR+"**_input.csv", recursive=False)
iofiles=[(ifile, getoutput(ifile)) for ifile in
inputfiles]
metafunc.parametrize("csvio", iofiles)
#pytest.fixture
def excelvalidation_io(csvio):
dfInput, expectedOutput= csvio
return(dfInput, expectedOutput)
#pytest.fixture
def client():
client = app.test_client()
return client
When I run the tests, "Test_dummy.py" also tries to load the "excelvalidation_io" fixture, and it generates this error:
In test_nothing: function uses no argument 'csvio'
I have tried placing the fixture inside "Test_file.py" and the problem is solved, but I read that it is good practice to keep all the fixtures in the conftest file.
pytest_generate_tests is a hook that is called for every collected test function, so in this case you need to check whether the test actually uses the "excelvalidation_io" fixture and do nothing otherwise, as in:
def pytest_generate_tests(metafunc):
    if "excelvalidation_io" in metafunc.fixturenames:
        inputfiles = glob.glob(DATADIR + "**_input.csv", recursive=False)
        iofiles = [(ifile, getoutput(ifile)) for ifile in inputfiles]
        metafunc.parametrize("csvio", iofiles)
I'm trying pytest parametrization with pytest_generate_tests():
conftest.py
def pytest_generate_tests(metafunc):
    if 'cliautoconfigargs' in metafunc.fixturenames:
        metafunc.parametrize(
            'cliautoconfigargs', list(<some list of params>))
test_cliautoconfig.py
def test_check_conf_mode(cliautoconfigargs):
    assert True

def test_enable_disable_command(cliautoconfigargs):
    assert True
With this configuration, each test runs with all of its parameters, and only after it completes does the next test start with its parameters. Instead, I would like all tests to run with their first parameter, then with the second parameter, and so on.
For example, I currently get the following output:
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs0]
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs1]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs0]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs1]
I want to get this instead:
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs0]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs0]
test_cliautoconfig.py::test_check_conf_mode[cliautoconfigargs1]
test_cliautoconfig.py::test_enable_disable_command[cliautoconfigargs1]
Sorry for the issue duplication.
Found the answer in "maintaining order of test execution when parametrizing tests in test class": parametrizing with scope="class" groups the runs by parameter, because pytest reorders tests so that each value of a higher-scoped parametrized fixture is set up only once.
conftest.py
def pytest_generate_tests(metafunc):
    if 'cliautoconfigargs' in metafunc.fixturenames:
        metafunc.parametrize(
            'cliautoconfigargs', list(<some list of params>), scope="class"
        )
test_cliautoconfig.py
class TestCommand:
    def test_check_conf_mode(self, cliautoconfigargs):
        assert True

    def test_enable_disable_command(self, cliautoconfigargs):
        assert True
I have been using pytest for software testing lately, but I am running into a problem when dynamically parametrizing test fixtures. When testing, I would like to be able to provide the option to:
A) Test a specific file by specifying its file name
B) Test all files in the installed root directory
Below is my current conftest.py. What I want is this: if option A (--file_name) is chosen, create a parametrized test fixture using the specified file name; if option B (--all_files) is chosen, provide a list of all the files as a parametrized test fixture.
import os
import pytest

def pytest_addoption(parser):
    parser.addoption("--file_name", action="store", default=[], help="Specify file-under-test")
    parser.addoption("--all_files", action="store_true", help="Option to test all files in the root directory")

@pytest.fixture(scope='module')
def file_name(request):
    return request.config.getoption('--file_name')

def pytest_generate_tests(metafunc):
    if 'file_name' in metafunc.fixturenames:
        if metafunc.config.option.all_files:
            all_files = list_all_files()
        else:
            all_files = "?"
        metafunc.parametrize("file_name", all_files)

def list_all_files():
    root_directory = '/opt/'
    if os.listdir(root_directory):
        # files have a .cool extension that needs to be split off
        return [name.split(".cool")[0] for name in os.listdir(root_directory)
                if os.path.isdir(os.path.join(root_directory, name))]
    else:
        print("No .cool files found in {}".format(root_directory))
The more I fiddle with this, the more I find I can only get one of the options working but not the other. What do I need to do to get both options (and possibly more) to dynamically create parametrized test fixtures?
Are you looking for something like this?
def pytest_generate_tests(metafunc):
    if 'file_name' in metafunc.fixturenames:
        files = []
        if metafunc.config.option.all_files:
            files = list_all_files()
        fn = metafunc.config.option.file_name
        if fn:
            files.append(fn)
        metafunc.parametrize('file_name', files, scope='module')
There is no need to define the file_name fixture.
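With that hook in place, both modes are driven from the command line; for example (the file name here is just a placeholder):

pytest --all_files                # parametrize over everything list_all_files() returns
pytest --file_name some_file      # run only against the named file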
Is it possible to use pytest_addoption(parser) to create a list that is used by a pytest.yield_fixture? For example:
def pytest_addoption(parser):
    parser.addoption("--foo", action="store", default="1,2,3")

@pytest.yield_fixture(params=request.config.getoption('--foo').split(','))
def test_bar(request):
    do_something(request.param)
Say you had 6 browsers, and you wanted the ability to run the tests against one browser as a quick check. I can't figure out how to get this in place before test discovery/generation. Help!
This obviously doesn't work, since the request variable does not exist at module scope, which is where the expression in the decorator is evaluated. The way to solve this is to use the pytest_generate_tests hook:
# conftest.py
def pytest_addoption(parser):
    parser.addoption('--foo', action='store', default='1,2,3')

def pytest_configure(config):
    config.foo = config.getoption('foo').split(',')

def pytest_generate_tests(metafunc):
    if 'foo' in metafunc.fixturenames:
        metafunc.parametrize('foo', metafunc.config.foo)

# test_bar.py
def test_bar(foo):
    do_something(foo)
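For a quick single-value run (reusing the placeholder values from the question):

pytest --foo 1     # test_bar runs once, with foo == '1'
pytest             # falls back to the default '1,2,3', so test_bar runs three times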