I would like to use fixtures as arguments of pytest.mark.parametrize or something that would have the same results.
For example:
import pytest
import my_package
@pytest.fixture
def dir1_fixture():
    return '/dir1'

@pytest.fixture
def dir2_fixture():
    return '/dir2'

@pytest.mark.parametrize('dirname, expected', [(dir1_fixture, 'expected1'), (dir2_fixture, 'expected2')])
def test_directory_command(dirname, expected):
    result = my_package.directory_command(dirname)
    assert result == expected
The problem with fixture params is that every param of the fixture will get run every time it's used, but I don't want that. I want to be able to choose which fixtures will get used depending on the test.
Will was on the right path: you should use request.getfixturevalue to retrieve the fixture. But you can do it right in the test, which is simpler.
@pytest.mark.parametrize('dirname, expected', [
    ('dir1_fixture', 'expected1'),
    ('dir2_fixture', 'expected2'),
])
def test_directory_command(dirname, expected, request):
    result = my_package.directory_command(request.getfixturevalue(dirname))
    assert result == expected
Another way is to use the pytest-lazy-fixture plugin:
@pytest.mark.parametrize('dirname, expected', [
    (pytest.lazy_fixture('dir1_fixture'), 'expected1'),
    (pytest.lazy_fixture('dir2_fixture'), 'expected2'),
])
def test_directory_command(dirname, expected):
    result = my_package.directory_command(dirname)
    assert result == expected
If you're on pytest 3.0 or later, I think you should be able to solve this particular scenario by writing a fixture using getfixturevalue:
@pytest.fixture(params=['dir1_fixture', 'dir2_fixture'])
def dirname(request):
    return request.getfixturevalue(request.param)
However, you can't use this approach if the fixture you're attempting to dynamically load is parametrized.
Alternatively, you might be able to figure something out with the pytest_generate_tests hook. I haven't been able to bring myself to look into that much, though.
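For reference, here is a rough, untested sketch of what that hook-based approach might look like for this scenario; the fixture and test names mirror the question, and nothing below comes from the original answer:

import pytest

# conftest.py (sketch): parametrize any test that requests `dirname` with
# fixture *names*; indirect=True hands each name to the `dirname` fixture
# as request.param, where it is resolved via getfixturevalue.
def pytest_generate_tests(metafunc):
    if "dirname" in metafunc.fixturenames:
        metafunc.parametrize("dirname", ["dir1_fixture", "dir2_fixture"], indirect=True)

@pytest.fixture
def dirname(request):
    return request.getfixturevalue(request.param)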
This isn't currently supported by pytest. There is an open feature request for it, though (it has been open since 2013).
As for now, my only solution is to create a fixture that returns a dictionary of fixtures.
import pytest
import my_package
@pytest.fixture
def dir1_fixture():
    return '/dir1'

@pytest.fixture
def dir2_fixture():
    return '/dir2'

@pytest.fixture
def dir_fixtures(
    dir1_fixture,
    dir2_fixture
):
    return {
        'dir1_fixture': dir1_fixture,
        'dir2_fixture': dir2_fixture
    }

@pytest.mark.parametrize('fixture_name, expected', [('dir1_fixture', 'expected1'), ('dir2_fixture', 'expected2')])
def test_directory_command(dir_fixtures, fixture_name, expected):
    dirname = dir_fixtures[fixture_name]
    result = my_package.directory_command(dirname)
    assert result == expected
Not the best since it does not use a solution built into pytest, but it works for me.
DO NOT TRY TO CHANGE FIXTURE PARAMETERS DURING TEST EXECUTION
Invalid example: @pytest.fixture(scope="class", params=other_fixture)
Now I'll explain why it doesn't work: pytest creates the session objects before running the tests, and those objects already contain the parameters each test will run with. You cannot change those parameters during test execution.
If you really want to change the parameters dynamically, you can use an intermediate text file, e.g. "params.txt".
Example: @pytest.fixture(scope="class", params=json.load(open("params.txt"))).
Again, you will not be able to change the content of the file during the test; if you change it then, the change will not be visible to the test. Instead, change the contents of the file when the program starts, before the session objects are created. To do that, define a pytest_sessionstart(session) hook in conftest.py and update the file content there.
For more details, see: How to run a method before all tests in all classes? and https://docs.pytest.org/en/6.2.x/reference.html#pytest.hookspec.pytest_sessionstart
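A minimal sketch of that conftest.py hook, assuming the params file holds a JSON list; the file name and its contents here are illustrative, not from the original answer:

# conftest.py (sketch)
import json

def pytest_sessionstart(session):
    # Runs after the Session object is created but before collection,
    # i.e. before test modules (and their fixture decorators) are imported.
    with open("params.txt", "w") as f:
        json.dump(["topic_a", "topic_b"], f)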
Related
I have one function like this one:
def function(df, path_write):
    df['A'] = df['col1'] * df['col2']
    write(df, path_write)
The function is not that simple, but the question is: how can I write a unit test if the function does not return any value?
If the function returned the new df it would be simple: just use assert_frame_equal (from pandas.testing import assert_frame_equal) and mock the write method.
But without that return, how can I test the df line?
In general, I can think of only two kinds of functions: Those that return a value and those that produce side-effects.
With the second kind, you typically don't want the side-effects to actually happen during testing. For example, if your function writes something to disk or sends some data to some API on the internet, you usually don't want it to actually do that during the test, you only want to ensure that it attempts to do the right thing.
To roll with the example of disk I/O as a side effect: you would usually have some function that does the actual writing to the filesystem, which the function under test calls. Let's say it is named write. The typical approach would be to mock that write function in your test. Then you would need to verify that the mocked write was called with the arguments you expected.
Say you have the following code.py for example:
def write(thing: object, path: str) -> None:
    print("Some side effect like disk I/O...")

def function(thing: object, file_name: str) -> None:
    ...
    directory = "/tmp/"
    write(thing, path=directory + file_name)
To test function, I would suggest the following test.py:
from unittest import TestCase
from unittest.mock import MagicMock, patch
from . import code
class MyTestCase(TestCase):
    @patch.object(code, "write")
    def test_function(self, mock_write: MagicMock) -> None:
        test_thing = object()
        test_file_name = "test.txt"
        self.assertIsNone(code.function(test_thing, test_file_name))
        mock_write.assert_called_once_with(
            test_thing,
            path="/tmp/" + test_file_name,
        )
Check out unittest.mock for more details on mocking with the standard library. I would strongly advise to use the tools there and not do custom monkey-patching. The latter is certainly possible, but always carries the risk that you forget to revert the patched objects back to their original state after every test. That can break the entire rest of your test cases and depending on how you monkey-patched, the source of the resulting errors may become very hard to track down.
Hope this helps.
In the mock for write() you can add assert statements to ensure the form of df is as you would expect. For example:
def _mock_write(df, path):
    assert path == '<expected path value>'
    assert_frame_equal(df, <expected dataframe>)
So the full test case would be:
def test_function(self, monkeypatch):
    # Define mock function
    def _mock_write(df, path):
        assert path == '<expected path value>'
        assert_frame_equal(df, <expected dataframe>)

    # Mock write function
    monkeypatch.setattr(<MyClass>, 'write', _mock_write)

    # Run function to enter mocked write function
    function(test_df, test_path_write)
N.B. This assumes you are using pytest as your test runner, which supports the setup and teardown of monkeypatch. Other answers show the usage for the standard unittest framework.
Background
We have a big suite of reusable test cases which we run in different environments. For our suite with test case ...
@pytest.mark.system_a
@pytest.mark.system_b
# ...
@pytest.mark.system_z
def test_one_thing_for_current_system():
    assert stuff()
... we execute pytest -m system_a on system A, pytest -m system_b on system B, and so on.
Goal
We want to parametrize multiple test cases for one system only, but we neither want to copy-paste those test cases nor generate the parametrization dynamically based on the command-line argument pytest -m.
Our Attempt
Copy And Mark
Instead of copy-pasting the test case, we assign the existing function object to a variable. For ...
class TestBase:
    @pytest.mark.system_a
    def test_reusable_thing(self):
        assert stuff()

class TestReuse:
    test_to_be_altered = pytest.mark.system_z(TestBase.test_reusable_thing)
... pytest --collect-only shows two test cases
TestBase.test_reusable_thing
TestReuse.test_to_be_altered
However, applying pytest.mark to one of the cases also affects the other one. Therefore, both cases end up marked as system_a and system_z.
Using copy.deepcopy(TestBase.test_reusable_thing) and changing __name__ of the copy before adding the mark does not help.
Copy And Parametrize
The above example is only used for illustration, as it does not actually alter the test case. For our use case we tried something like ...
class TestBase:
    @pytest.fixture
    def thing(self):
        return 1

    @pytest.mark.system_a
    # ...
    @pytest.mark.system_y
    def test_reusable_thing(self, thing, lots, of, other, fixtures):
        # lots of code
        assert stuff() == thing

copy_of_test = copy.deepcopy(TestBase.test_reusable_thing)
copy_of_test.__name__ = "test_altered"

class TestReuse:
    test_altered = pytest.mark.system_z(
        pytest.mark.parametrize("thing", [1, 2, 3])(copy_of_test)
    )
Because of the aforementioned problem, this parametrizes test_reusable_thing for all systems, while we only wanted to parametrize the copy for system_z.
Question
How can we parametrize test_reusable_thing for system_z ...
without changing the implementation of test_reusable_thing,
and without changing the implementation of fixture thing,
and without copy-pasting the implementation of test_reusable_thing
and without manually creating a wrapper function def test_altered for which we have to copy-paste requested fixtures only to pass them to TestBase().test_reusable_thing(thing, lots, of, other, fixtures).
Somehow pytest has to link the copy to the original. If we know how (e.g. based on a variable like __name__) we could break the link.
You can defer the parametrization to the pytest_generate_tests hookimpl and add your custom logic there for implicitly populating test parameters, e.g.
def pytest_generate_tests(metafunc):
    # test has `my_arg` in its parameters
    if 'my_arg' in metafunc.fixturenames:
        marker_for_system_z = metafunc.definition.get_closest_marker('system_z')
        # test was marked with `@pytest.mark.system_z`
        if marker_for_system_z is not None:
            values_for_system_z = some_data.get('z')
            metafunc.parametrize('my_arg', values_for_system_z)
A demo example to pass the marker name to test_reusable_thing via a system arg:
import pytest

def pytest_generate_tests(metafunc):
    if 'system' in metafunc.fixturenames:
        args = [marker.name for marker in metafunc.definition.iter_markers()]
        metafunc.parametrize('system', args)

class Tests:
    @pytest.fixture
    def thing(self):
        return 1

    @pytest.mark.system_a
    @pytest.mark.system_b
    @pytest.mark.system_c
    @pytest.mark.system_z
    def test_reusable_thing(self, thing, system):
        assert system.startswith('system_')
Running this will yield four tests in total:
test_spam.py::Tests::test_reusable_thing[system_z] PASSED
test_spam.py::Tests::test_reusable_thing[system_c] PASSED
test_spam.py::Tests::test_reusable_thing[system_b] PASSED
test_spam.py::Tests::test_reusable_thing[system_a] PASSED
It seems it is possible to pass arguments to fixtures:
Pass a parameter to a fixture function
yet, when implementing this minimal example, I get an error.
import pytest

@pytest.fixture
def my_fixture(v):
    print("fixture in")
    yield v + 1
    print("fixture out")

@pytest.mark.parametrize("my_fixture", [1], indirect=True)
def test_myfixture(my_fixture):
    print(my_fixture)
The error output:
    @pytest.fixture
    def my_fixture(v):
E       fixture 'v' not found
Is anything wrong with the code above?
(Python 3.8.10, pytest 6.2.5)
To briefly elaborate on Vince's answer: in general, arguments to fixture functions are interpreted as the names of other fixtures to load before the one being defined. That's why you got the error message fixture 'v' not found: pytest thought that you wanted my_fixture to depend on another fixture called v that doesn't exist.
The request argument is an exception to this general rule. It doesn't refer to another fixture. Instead, it instructs pytest to give the fixture access to the request object, which contains information on the currently running test, e.g. which parameters are being tested (via request.param), which markers the test has (via request.node.get_closest_marker()), etc.
So to make use of indirect parametrization, your fixture needs to (i) accept the request argument and (ii) do something with request.param. For more information, here are the relevant pages in the documentation:
The request argument
Parametrized fixtures
Indirect parametrization
This works:
@pytest.fixture
def my_fixture(request):
    print("fixture in")
    yield request.param + 1  # difference here!
    print("fixture out")

@pytest.mark.parametrize("my_fixture", [1], indirect=True)
def test_myfixture(my_fixture):
    print(my_fixture)
There is nothing wrong. Fixtures are used to fix constants so they can be reused identically in multiple tests; they don't accept arguments. By the way, you can create a pytest fixture that is a "constant" function:
@pytest.fixture
def my_fixture():
    return lambda x: x + 1
And avoid print in fixtures (in the final version, at least).
The initial scenario is writing tests for functions from a library (lib.py).
lib.py:
def fun_x(val):
    # does something with val
    return result

def fun(val):
    x = fun_x(val)
    # does something with x
    return result
test__lib.py:
import pytest
import lib
def lib_fun_x_mocked(val):
    return "qrs"

def test_fun():
    assert lib.fun("abc") == "xyz"
But lib.fun_x() does something very expensive, or requires a resource that is not reliably available, or is not deterministic. So I want to substitute it with a mock function such that, when test_fun() is executed, lib.fun() uses lib_fun_x_mocked() instead of fun_x() from its local scope.
So far I'm running into cryptic error messages when I try to apply mock/patch recipes.
You can use the built-in fixture monkeypatch provided by pytest.
import lib

def lib_fun_x_mocked(some_val):  # still takes an argument
    return "qrs"

def test_fun(monkeypatch):
    with monkeypatch.context() as mc:
        mc.setattr(lib, 'fun_x', lib_fun_x_mocked)
        result = lib.fun('abc')
        assert result == 'qrs'
Also as a side note, if you are testing the function fun you shouldn't be asserting the output of fun_x within that test. You should be asserting that fun behaves in the way that you expect given a certain value is returned by fun_x.
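For illustration only, since the real body of fun is elided in the question, suppose fun appended an exclamation mark to whatever fun_x returns; the test would then assert on that derived value rather than on the mocked return itself:

import lib

def lib_fun_x_mocked(val):
    return "qrs"

def test_fun_transforms_x(monkeypatch):
    monkeypatch.setattr(lib, 'fun_x', lib_fun_x_mocked)
    # Hypothetical expectation: assumes lib.fun returns fun_x(val) + "!";
    # this behavior is an assumption for illustration, not taken from the question.
    assert lib.fun('abc') == 'qrs!'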
I'm writing a test system using pytest, and I'm looking for a way to make the execution of particular tests depend on the results of other tests.
For example, we have standard test class:
import pytest

class Test_Smoke:
    def test_A(self):
        pass

    def test_B(self):
        pass

    def test_C(self):
        pass
test_C() should be executed only if test_A() and test_B() passed; otherwise it should be skipped.
I need a way to do something like this at the test or test-class level (e.g. Test_Perfo executes only if all of Test_Smoke passed), and I'm unable to find a solution using standard methods (like @pytest.mark.skipif).
Is it possible at all with pytest?
You might want to have a look at pytest-dependency. It is a plugin that allows you to skip some tests if some other test has failed.
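A minimal sketch of the marker-based usage, following the plugin's documented basic example (shown with module-level functions for brevity; methods in a class need qualified dependency names, see the plugin docs):

import pytest

@pytest.mark.dependency()
def test_A():
    pass

@pytest.mark.dependency()
def test_B():
    pass

# Skipped automatically if test_A or test_B failed or was skipped.
@pytest.mark.dependency(depends=["test_A", "test_B"])
def test_C():
    pass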
Consider also pytest-depends:
def test_build_exists():
    assert os.path.exists(BUILD_PATH)

@pytest.mark.depends(on=['test_build_exists'])
def test_build_version():
    # this will skip if `test_build_exists` fails...
    ...