The initial scenario is writing tests for functions from a library (lib.py).
lib.py:
def fun_x(val):
    # does something with val
    return result

def fun(val):
    x = fun_x(val)
    # does something with x
    return result
test__lib.py:
import pytest
import lib

def lib_fun_x_mocked(val):
    return "qrs"

def test_fun():
    assert lib.fun("abc") == "xyz"
But lib.fun_x() does something very expensive, requires a resource that is not reliably available, or is not deterministic. So I want to substitute it with a mock function such that, when test_fun() is executed, lib.fun() uses lib_fun_x_mocked() instead of the fun_x() from its local scope.
So far I'm running into cryptic error messages when I try to apply mock/patch recipes.
You can use the built-in fixture monkeypatch provided by pytest.
import lib

def lib_fun_x_mocked(some_val):  # still takes an argument
    return "qrs"

def test_fun(monkeypatch):
    with monkeypatch.context() as mc:
        mc.setattr(lib, 'fun_x', lib_fun_x_mocked)
        result = lib.fun('abc')
    assert result == 'qrs'
Also, as a side note: if you are testing the function fun, you shouldn't be asserting the output of fun_x within that test. You should be asserting that fun behaves the way you expect given a certain value returned by fun_x.
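For example, a minimal sketch of that idea (purely hypothetical: it assumes lib.fun upper-cases whatever fun_x returns, which may not match the real lib):

import lib

def test_fun_uses_fun_x_result(monkeypatch):
    # Hypothetical assumption: lib.fun returns fun_x's result upper-cased.
    monkeypatch.setattr(lib, 'fun_x', lambda val: "qrs")
    # Assert on fun's behaviour relative to the mocked return value,
    # not on the output of fun_x itself.
    assert lib.fun('abc') == "QRS"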
Related
I have one function like this one:
def function(df, path_write):
    df['A'] = df['col1'] * df['col2']
    write(df, path_write)
The real function is not that simple, but the question is: how can I write a unit test if the function does not return any value?
If the function returned the new df it would be simple: just use assert_frame_equal (from pandas.testing import assert_frame_equal) and mock the write method.
But without that return, how can I test the df line?
In general, I can think of only two kinds of functions: Those that return a value and those that produce side-effects.
With the second kind, you typically don't want the side-effects to actually happen during testing. For example, if your function writes something to disk or sends some data to some API on the internet, you usually don't want it to actually do that during the test, you only want to ensure that it attempts to do the right thing.
To roll with the example of disk I/O as a side effect: you would usually have some function that does the actual writing to the filesystem, which the function under test calls. Let's say it is named write. The typical approach would be to mock that write function in your test. Then you would verify that the mocked write was called with the arguments you expected.
Say you have the following code.py for example:
def write(thing: object, path: str) -> None:
    print("Some side effect like disk I/O...")

def function(thing: object, file_name: str) -> None:
    ...
    directory = "/tmp/"
    write(thing, path=directory + file_name)
To test function, I would suggest the following test.py:
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code

class MyTestCase(TestCase):
    @patch.object(code, "write")
    def test_function(self, mock_write: MagicMock) -> None:
        test_thing = object()
        test_file_name = "test.txt"
        self.assertIsNone(code.function(test_thing, test_file_name))
        mock_write.assert_called_once_with(
            test_thing,
            path="/tmp/" + test_file_name,
        )
Check out unittest.mock for more details on mocking with the standard library. I would strongly advise using the tools there rather than doing custom monkey-patching. The latter is certainly possible, but it always carries the risk that you forget to revert the patched objects to their original state after every test. That can break the rest of your test cases, and depending on how you monkey-patched, the source of the resulting errors can be very hard to track down.
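To make that concrete, here is a rough sketch (reusing the code module from the example above) of why patch is safer than assigning attributes by hand:

from unittest.mock import patch

from . import code

def test_patch_is_reverted_automatically():
    original = code.write
    with patch.object(code, "write") as mock_write:
        code.function(object(), "test.txt")  # calls the mock, no real I/O
        mock_write.assert_called_once()
    # Once the with-block exits, the original function is restored.
    assert code.write is original

def fragile_manual_patching():
    # Hand-rolled monkey-patching: if the restore step is skipped (early return,
    # exception, forgotten cleanup), every later test keeps seeing the fake write.
    saved = code.write
    code.write = lambda thing, path: None
    try:
        code.function(object(), "test.txt")
    finally:
        code.write = saved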
Hope this helps.
In the mock for write() you can add assert statements to ensure the form of df is as you would expect. For example:
def _mock_write(df, path):
    assert path == '<expected path value>'
    assert_frame_equal(df, <expected dataframe>)
So the full test case would be:
def test_function(self, monkeypatch):
    # Define mock function
    def _mock_write(df, path):
        assert path == '<expected path value>'
        assert_frame_equal(df, <expected dataframe>)

    # Mock write function
    monkeypatch.setattr(<MyClass>, 'write', _mock_write)

    # Run function to enter mocked write function
    function(test_df, test_path_write)
N.B. This assumes you are using pytest as your test runner, which supports the set-up and tear-down of monkeypatch. Other answers show the usage for the standard unittest framework.
I would like to use fixtures as arguments of pytest.mark.parametrize or something that would have the same results.
For example:
import pytest
import my_package

@pytest.fixture
def dir1_fixture():
    return '/dir1'

@pytest.fixture
def dir2_fixture():
    return '/dir2'

@pytest.mark.parametrize('dirname, expected', [(dir1_fixture, 'expected1'), (dir2_fixture, 'expected2')])
def test_directory_command(dirname, expected):
    result = my_package.directory_command(dirname)
    assert result == expected
The problem with fixture params is that every param of the fixture will get run every time it's used, but I don't want that. I want to be able to choose which fixtures will get used depending on the test.
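For context, this is roughly what "every param will get run every time" means (a minimal, made-up sketch):

import pytest

@pytest.fixture(params=['/dir1', '/dir2'])
def any_dir(request):
    return request.param

def test_uses_dirs(any_dir):
    # This one test is collected twice, once per param
    # (test_uses_dirs[/dir1] and test_uses_dirs[/dir2]);
    # there is no built-in way to say "only /dir1 for this test".
    assert any_dir.startswith('/')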
Will was on the right path, you should use request.getfixturevalue to retrieve the fixture.
But you can do it right in the test, which is simpler.
@pytest.mark.parametrize('dirname, expected', [
    ('dir1_fixture', 'expected1'),
    ('dir2_fixture', 'expected2')])
def test_directory_command(dirname, expected, request):
    result = my_package.directory_command(request.getfixturevalue(dirname))
    assert result == expected
Another way is to use lazy-fixture plugin:
@pytest.mark.parametrize('dirname, expected', [
    (pytest.lazy_fixture('dir1_fixture'), 'expected1'),
    (pytest.lazy_fixture('dir2_fixture'), 'expected2')])
def test_directory_command(dirname, expected):
    result = my_package.directory_command(dirname)
    assert result == expected
If you're on pytest 3.0 or later, I think you should be able to solve this particular scenario by writing a fixture using getfixturevalue:
@pytest.fixture(params=['dir1_fixture', 'dir2_fixture'])
def dirname(request):
    return request.getfixturevalue(request.param)
However, you can't use this approach if the fixture you're attempting to dynamically load is parametrized.
Alternatively, you might be able to figure something out with the pytest_generate_tests hook. I haven't been able to bring myself to look into that much, though.
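For anyone who does want to explore that hook, here is a rough, untested sketch (it replaces the @pytest.mark.parametrize mark on the test; values reach the dirname fixture via indirect parametrization):

# conftest.py (sketch, untested)
import pytest

def pytest_generate_tests(metafunc):
    if "dirname" in metafunc.fixturenames:
        metafunc.parametrize(
            "dirname, expected",
            [("/dir1", "expected1"), ("/dir2", "expected2")],
            indirect=["dirname"],  # route the dirname values through the fixture below
        )

@pytest.fixture
def dirname(request):
    # request.param is the value supplied by pytest_generate_tests;
    # any per-value setup could happen here.
    return request.param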
This isn't currently supported by pytest. There is an open feature request for it, though (it has been open since 2013).
As for now, my only solution is to create a fixture that returns a dictionary of fixtures.
import pytest
import my_package

@pytest.fixture
def dir1_fixture():
    return '/dir1'

@pytest.fixture
def dir2_fixture():
    return '/dir2'

@pytest.fixture
def dir_fixtures(
    dir1_fixture,
    dir2_fixture
):
    return {
        'dir1_fixture': dir1_fixture,
        'dir2_fixture': dir2_fixture
    }

@pytest.mark.parametrize('fixture_name, expected', [('dir1_fixture', 'expected1'), ('dir2_fixture', 'expected2')])
def test_directory_command(dir_fixtures, fixture_name, expected):
    dirname = dir_fixtures[fixture_name]
    result = my_package.directory_command(dirname)
    assert result == expected
Not the best since it does not use a solution built into pytest, but it works for me.
DO NOT TRY TO CHANGE FIXTURE PARAMETERS DURING TEST EXECUTION
Invalid example: @pytest.fixture(scope="class", params=other_fixture)
Now I'll explain why it doesn't work:
Pytest creates session objects before running the tests, containing the parameters with which each test will run. During test execution you cannot change those parameters.
If you really want to do this (change the parameters dynamically), you can use an intermediate text file, e.g. "params.txt".
Example: @pytest.fixture(scope="class", params=json.load(open("topics.txt"))).
Again, you will not be able to change the content of the file during the test, because the change will not be visible to the test. You need to change the contents of the file when the program starts, before the session objects are created. To do that, define a pytest_sessionstart(session) hook in conftest.py and change the file content there.
For more details, check this documentation: How to run a method before all tests in all classes? and https://docs.pytest.org/en/6.2.x/reference.html#pytest.hookspec.pytest_sessionstart
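A minimal sketch of that hook, as described above (the file name and topic values here are just placeholders):

# conftest.py (sketch)
import json

def pytest_sessionstart(session):
    # Runs after the session object is created but before test modules are
    # collected, so a fixture declared in a test module as
    # @pytest.fixture(params=json.load(open("topics.txt"))) reads the fresh content.
    with open("topics.txt", "w") as f:
        json.dump(["topic_a", "topic_b"], f)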
I am trying to create a fixture that simply prints the arguments of a pytest test case.
For example:
@pytest.fixture(scope='function')
def print_test_function_arguments(request):
    # Get the value of argument_1 from the run of the test function
    print(f'argument_1 = {value_1}')

def test_something(print_test_function_arguments, argument_1, argument_2):
    assert False
If you want to do any kind of introspection, the request fixture is the way to go. request.node gives you the current test item, request.node.function the test_something function object, and request.getfixturevalue("spam") will evaluate the fixture spam and return its result (or take it from the fixture cache if it was already evaluated). A simple args introspection example (untested):
import inspect
import pytest

@pytest.fixture(scope='function')
def print_test_function_arguments(request):
    argspec = inspect.getfullargspec(request.node.function)
    positional_args = argspec.args
    positional_args.remove("print_test_function_arguments")
    for argname in positional_args:
        print(argname, "=", request.getfixturevalue(argname))
Of course, you can't evaluate the fixture print_test_function_arguments inside its own body, otherwise it would get stuck in infinite recursion, so its name must be removed from the arguments list first.
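As a usage sketch (assuming argument_1 and argument_2 are themselves fixtures, which request.getfixturevalue requires):

@pytest.fixture
def argument_1():
    return 42

@pytest.fixture
def argument_2():
    return "hello"

def test_something(print_test_function_arguments, argument_1, argument_2):
    # Running with `pytest -s` should print:
    #   argument_1 = 42
    #   argument_2 = hello
    assert argument_1 == 42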
So let's say I have this bit of code:
import coolObject

def doSomething():
    x = coolObject()
    x.coolOperation()
Now it's a simple enough method, and as you can see we are using an external library (coolObject).
In unit tests, I have to create a mock of this object that roughly replicates it. Let's call this mock object coolMock.
My question is how would I tell the code when to use coolMock or coolObject? I've looked it up online, and a few people have suggested dependency injection, but I'm not sure I understand it correctly.
Thanks in advance!
def doSomething(cool_object=None):
    cool_object = cool_object or coolObject()
    ...
In your test:
def test_do_something(self):
    cool_mock = mock.create_autospec(coolObject, ...)
    cool_mock.coolOperation.side_effect = ...
    doSomething(cool_object=cool_mock)
    ...
    self.assertEqual(cool_mock.coolOperation.call_count, ...)
As Dan's answer says, one option is to use dependency injection: have the function accept an optional argument and, if it's not passed in, use the default class, so that a test can pass in a mock.
Another option is to use the mock library (here or here) to replace your coolObject.
Let's say you have a foo.py that looks like
from somewhere.else import coolObject

def doSomething():
    x = coolObject()
    x.coolOperation()
In your test_foo.py you can do:
import mock

from foo import doSomething

def test_thing():
    # The fully-qualified path to the module, class, function - whatever you want to mock.
    path = 'foo.coolObject'
    with mock.patch(path) as m:
        doSomething()
        # Whatever you want to assert here.
        assert m.called
The path you use can include properties on objects, e.g. module1.module2.MyClass.my_class_method. A big gotcha is that you need to mock the object in the module being tested, not where it is defined. In the example above, that means using a path of foo.coolObject and not somewhere.else.coolObject.
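To make the gotcha concrete, a small sketch based on the foo.py above:

import mock

from foo import doSomething

def test_patch_target_matters():
    # Patching where coolObject is defined does NOT affect foo's copy of the name:
    with mock.patch('somewhere.else.coolObject') as m:
        doSomething()  # still uses the real coolObject that foo imported
        assert not m.called

    # Patching the name inside the module under test is what works:
    with mock.patch('foo.coolObject') as m:
        doSomething()
        assert m.called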
So I'm running py.test and trying to use monkeypatch. I understand that monkeypatch's intended purpose is to replace attributes in a module so that they can be tested. And I get that we can substitute in mock functions in order to do this.
Currently I am trying to run essentially the following block of code.
from src.module.submodule import *

def mock_function(parameter = None):
    return 0

def test_function_works(monkeypatch):
    monkeypatch.setattr("src.module.submodule.function", mock_function ]
    assert function(parameter = None) == 0
When the test runs, instead of swapping in mock_function, it just runs function. Could there be a reason why monkeypatch isn't activating?
I have got monkeypatch running successfully with other code before, so I don't see why this isn't working.
I haven't used pytest for this stuff, but I know that with the mock library, functions are patched in the namespace where they're called. i.e. from src.module.submodule import * imports src.module.submodule.function into your namespace, but you then patch it in its original namespace, so your local name for the function still accesses the original, unpatched code.
If you change this to
import src.module.submodule

def mock_function(parameter = None):
    return 0

def test_function_works(monkeypatch):
    monkeypatch.setattr("src.module.submodule.function", mock_function)
    assert src.module.submodule.function(parameter = None) == 0
does it succeed?
Looks like a typo, shouldn't it be
monkeypatch.setattr("src.module.submodule.function", mock_function)
i.e. closed with ) instead of the ] in your snippet?