How can I alias a pytest fixture? - python

I have a few pytest fixtures I use from third-party libraries and sometimes their names are overly long and cumbersome. Is there a way to create a short alias for them?
For example: the django_assert_max_num_queries fixture from pytest-django. I would like to call this max_queries in my tests.

You cannot just add an alias in the form of
max_queries = django_assert_max_num_queries
because fixtures are looked up by name at run-time and not imported (and even if they can be imported in some cases, this is not recommended).
But you can always write your own fixture that just yields another fixture:
import pytest

@pytest.fixture
def max_queries(django_assert_max_num_queries):
    yield django_assert_max_num_queries
Done this way, max_queries will behave exactly the same as django_assert_max_num_queries.
Note that you should use yield and not return, to make sure that control is returned to the fixture after the test.
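For illustration, a test using the alias might look like this (a minimal sketch; the URL and the query budget of 3 are made up, and client is pytest-django's test client fixture):
def test_author_list(client, max_queries):
    with max_queries(3):  # hypothetical budget for a hypothetical view
        client.get('/authors/')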

Related

Why are pytest fixtures not meant to be called directly from within tests?

I understand that pytest fixtures raise an error when calling a fixture directly from within a test. But I don't fully understand why. For context, I am a junior dev new to python so I may be missing something obvious that needs explaining.
I have a fixture as follows:
import json
from pathlib import Path

import pytest

@pytest.fixture()
def get_fixture_data_for(sub_directory: str, file_name: str) -> dict:
    file_path = Path(__file__).parent / sub_directory
    with open(file_path / file_name) as j:
        data = json.load(j)
    return data
and then a test that says something like
def test_file_is_valid():
    data = get_fixture_data_for('some_subdir', 'some_filename.json')
    # do something with this information
    ...
I have many different test files that will use this fixture function to read data from the files and then use the data in posting to an endpoint for integration tests.
When running this, I get the error that fixtures are not meant to be called directly, and should be created automatically when test functions request them as parameters. But why?
I see in the docs this is mentioned: https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly but I don't understand why this functionality was deprecated.
Answering my own question for any newbies who come across this in the future. As Michael alluded to in the comment above, what I am trying to do is use a helper function as a fixture.
Fixtures are loaded and run once (depending on the scope you give them) when the test suite is run. They are for setting up the test. For example, if you had a dict that needed populating and loading before being passed into tests, that would be best handled by a fixture.
However, if you want to manipulate some data generated within the test, you would use a helper function, as that is not something used to set up the test.
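In other words, the loader above works better as a plain helper that tests call directly. A sketch of that refactor (the module names helpers.py and test_endpoints.py are made up):
# helpers.py (hypothetical module name)
import json
from pathlib import Path

def get_fixture_data_for(sub_directory: str, file_name: str) -> dict:
    # Plain helper, not a fixture: load JSON test data from a sibling directory.
    file_path = Path(__file__).parent / sub_directory
    with open(file_path / file_name) as j:
        return json.load(j)

# test_endpoints.py (hypothetical)
from helpers import get_fixture_data_for

def test_file_is_valid():
    data = get_fixture_data_for('some_subdir', 'some_filename.json')
    # post `data` to the endpoint under test
    ...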

Pytest - How to assert whether a function has called monkeypatched methods

I have a complex function that calls many other 3rd party methods. I monkeypatched them out one by one:
import ThirdParty as tp

def my_method():
    tp.func_3rd_party_1()
    ...
    tp.func_3rd_party_5()
    return "some_value"
In my test:
import pytest
import ThirdParty

def test_my_method(monkeypatch):
    # pass the stub itself, not the result of calling it
    monkeypatch.setattr(ThirdParty, 'func_3rd_party_1', some_mock_1)
    ...
    monkeypatch.setattr(ThirdParty, 'func_3rd_party_5', some_mock_5)
    return_value = my_method()
    assert return_value
This runs just fine but the test feels too implicit for me in this form. I'd like to explicitly state that the monkeypatched methods were indeed called.
For the record, my mocked methods are not using any inbuilt Mock library resource. They are just redefined methods (smart stubs).
Is there any way to assert for that?
So the pytest monkeypatch fixture is specifically provided so you can change global attributes like environment variables, stuff in third-party libraries, etc., to provide some controlled and easy behavior for your test.
The Mock objects, on the other hand, are meant to provide all sorts of tracking and inspection on the object.
The two go hand in hand: You use patching to replace some third party function with a Mock object, then execute your code, and then ask the Mock object if it has indeed been invoked with the right arguments, for the right number of times.
Note that even though the mock module is part of unittest, it works perfectly fine with pytest.
Now as for the patching itself: whether unittest.mock.patch or pytest's monkeypatch fixture is more compact is up to your personal preference, and depends a bit on what exactly you want to patch.
import pytest
import ThirdParty
from unittest.mock import Mock

def test_my_method(monkeypatch):
    # refer to the mock module documentation for more complex
    # set-ups, where the mock object _also_ exhibits some behavior.
    # As is, calling the mock doesn't actually _do_ anything.
    some_mock_1 = Mock()
    ...
    some_mock_5 = Mock(return_value=66)
    monkeypatch.setattr(ThirdParty, 'func_3rd_party_1', some_mock_1)
    ...
    monkeypatch.setattr(ThirdParty, 'func_3rd_party_5', some_mock_5)
    my_method()  # exercise the code under test before asserting on the mocks
    some_mock_1.assert_called_once()
    some_mock_5.assert_called_with(42)
    ...
Now a note on this type of testing: don't go overboard! It can quite easily lead to what are called brittle tests: tests that break with the slightest change to your code. It can make refactoring an impossible nightmare.
These types of assertions are best when you use them in a message-focused object-oriented approach. If the whole point of the class or method under test is to invoke, in a particular way, the method or class of another object, then Mock away. If the calls to third party functions on the other hand are merely a means to an end, then go a level higher with your test and test for the desired behavior instead.
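A sketch of that higher-level style, reusing the names from the question (the lambda stubs stand in for your smart stubs):
def test_my_method_behavior(monkeypatch):
    # stubs provide controlled behavior; no call-tracking assertions
    monkeypatch.setattr(ThirdParty, 'func_3rd_party_1', lambda: None)
    ...
    monkeypatch.setattr(ThirdParty, 'func_3rd_party_5', lambda: 66)
    # assert on the observable result instead of on the calls
    assert my_method() == "some_value"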

Module level fixture is not running

I want to have a specific setup/tear down fixture for one of the test modules. Obviously, I want it to run the setup code once before all the tests in the module, and once after all tests are done.
So, I've come up with this:
import pytest
@pytest.fixture(scope="module")
def setup_and_teardown():
    print("Start")
    yield
    print("End")

def test_checking():
    print("Checking")
    assert True
This does not work that way. It will only work if I provide setup_and_teardown as an argument to the first test in the module.
Is this the way it's supposed to work? Isn't it supposed to be run automatically if I mark it as a module level fixture?
Module-scoped fixtures behave the same as fixtures of any other scope: they are only used if they are explicitly requested by a test, applied via @pytest.mark.usefixtures, or have autouse=True set:
@pytest.fixture(scope="module", autouse=True)
def setup_and_teardown():
    print("setup")
    yield
    print("teardown")
For module- and session-scoped fixtures that do the setup/teardown as in your example, this is the most commonly used option.
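For completeness, the two non-autouse options mentioned above look like this (a sketch; either one alone is enough):
import pytest

# Option 1: apply the fixture to every test in the module
pytestmark = pytest.mark.usefixtures("setup_and_teardown")

# Option 2: request the fixture explicitly in a test
def test_checking(setup_and_teardown):
    assert True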
For fixtures that yield an object (for example an expensive resource that shall only be allocated once) that is accessed in the test, this does not make sense, because the fixture has to be passed to the test to be accessible. Also, it may not be needed in all tests.
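A sketch of that case (create_connection stands in for whatever expensive setup you actually have):
import pytest

@pytest.fixture(scope="module")
def db_connection():
    conn = create_connection()  # hypothetical expensive resource
    yield conn
    conn.close()

def test_query(db_connection):  # must be requested to be accessible
    assert db_connection is not None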

how to share a variable across modules for all tests in py.test

I have multiple tests run by py.test that are located in multiple classes in multiple files.
What is the simplest way to share a large dictionary - which I do not want to duplicate - with every method of every class in every file to be used by py.test?
In short, I need to make a "global variable" for every test. Outside of py.test, I have no use for this variable, so I don't want to store it in the files being tested. I made frequent use of py.test's fixtures, but this seems overkill for this need. Maybe it's the only way?
Update: pytest-namespace hook is deprecated/removed. Do not use. See #3735 for details.
You mention the obvious and least magical option: using a fixture. You can apply it to entire modules using pytestmark = pytest.mark.usefixtures('big_dict') in your module, but then it won't be in your namespace so explicitly requesting it might be best.
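That fixture option might look like this (a sketch; big_dict is the name used above):
# conftest.py
import pytest

@pytest.fixture(scope="session")
def big_dict():
    return {'foo': 'bar'}

# test_something.py (hypothetical)
def test_uses_dict(big_dict):
    assert big_dict['foo'] == 'bar'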
Alternatively you can assign things into the pytest namespace using the hook:
# conftest.py
def pytest_namespace():
    return {'my_big_dict': {'foo': 'bar'}}
And now you have pytest.my_big_dict. The fixture is probably still nicer though.
There are tons of things I love about py.test, but one thing I absolutely HATE is how poorly it plays with code intelligence tools. I disagree that an autouse fixture to declare a variable is the "most clear" method in this case, because not only does it completely baffle my linter, it also baffles anyone else who is not familiar with how py.test works. There is a lot of magic there, imo.
So, one thing you can do that doesn't make your linter explode and doesn't require TestCase boilerplate is to create a module called globals. Inside this module, stub the names of the things you want to be global to {} or None, and import the globals module into your tests. Then in your conftest.py file, use the py.test hooks to set (or reset) your global variable(s) as appropriate. This has the advantage of giving you the stub to work with when building tests and the full data for the tests at runtime.
For example, you can use the pytest_configure() hook to set your dict right when py.test starts up. Or, if you wanted to make sure the data was pristine between each test, you could autouse a fixture to assign your global variable to your known state before each test.
# globals.py
my_data = {}  # Create a stub for your variable

# test_module.py
import globals as gbl

def test_foo():
    # The global is in the namespace when creating tests
    assert gbl.my_data['foo'] == 'bar'

# conftest.py
import pytest

import globals as gbl

my_data = {'foo': 'bar'}  # Create the master copy in conftest

@pytest.fixture(autouse=True)
def populate_globals():
    gbl.my_data = my_data  # Assign the master value to the global before each test
One other advantage to this approach is you can use type hinting in your globals module to give you code completion on the global objects in your test, which probably isn't necessary for a dict but I find it handy when I am using an object (such as webdriver). :)
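The pytest_configure() variant mentioned above would look like this (sketch):
# conftest.py
import globals as gbl

def pytest_configure(config):
    # runs once at pytest start-up, before any tests execute
    gbl.my_data = {'foo': 'bar'}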
I'm surprised no answer mentioned caching yet: since version 2.8, pytest has a powerful cache mechanism.
Usage example
@pytest.fixture(autouse=True)
def init_cache(request):
    data = request.config.cache.get('my_data', None)
    if data is None:
        data = {'spam': 'eggs'}
        request.config.cache.set('my_data', data)
Access the data dict in tests via builtin request fixture:
def test_spam(request):
    # cache.get requires an explicit default value
    data = request.config.cache.get('my_data', None)
    assert data['spam'] == 'eggs'
Sharing the data between test runs
The cool thing about request.config.cache is that it is persisted on disk, so it can even be shared between test runs. This comes in handy when you run tests distributed (pytest-xdist) or have some long-running data generation which does not change once generated:
@pytest.fixture(autouse=True)
def generate_data(request):
    data = request.config.cache.get('my_data', None)
    if data is None:
        data = long_running_generation_function()
        request.config.cache.set('my_data', data)
Now the tests won't need to recalculate the value on different test runs unless you clear the cache on disk explicitly. Take a look at what's currently in the cache:
$ pytest --cache-show
...
my_data contains:
{'spam': 'eggs'}
Rerun the tests with the --cache-clear flag to delete the cache and force the data to be recalculated. Or just remove the .pytest_cache directory in the project root dir.
Where to go from here
The related section in pytest docs: Cache: working with cross-testrun state.
Having a big dictionary of globals that every test uses is probably a bad idea. If possible, I suggest refactoring your tests to avoid this sort of thing.
That said, here is how I would do it: define an autouse fixture that adds a reference to the dictionary in the global namespace of every function.
Here is some code. It's all in the same file, but you can move the fixture out to conftest.py at the top level of your tests.
import pytest

my_big_global = {'key': 'value'}

@pytest.fixture(autouse=True)
def myglobal(request):
    # Python 2 spelling; on Python 3 this is request.function.__globals__
    request.function.func_globals['foo'] = my_big_global

def test_foo():
    assert foo['key'] == 'value'

def test_bar():
    assert foo['key'] == 'bar'
Here is the output from when I run this code:
$ py.test test_global.py -vv
======================================= test session starts =======================================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
collected 2 items
test_global.py:9: test_foo PASSED
test_global.py:12: test_bar FAILED
============================================ FAILURES =============================================
____________________________________________ test_bar _____________________________________________
def test_bar():
> assert foo['key'] == 'bar'
E assert 'value' == 'bar'
E - value
E + bar
test_global.py:13: AssertionError
=============================== 1 failed, 1 passed in 0.01 seconds ===============================
Note that you can't use a session-scoped fixture because then you don't have access to each function object. Because of this, I'm making sure to define my big global dictionary once and use references to it -- if I defined the dictionary in that assignment statement, a new copy would be made each time.
In closing, doing anything like this is probably a bad idea. Good luck though :)
You can add your global variable as an option inside the pytest_addoption hook.
You can define the option explicitly with addoption, or use the set_defaults method if you want the attribute to be determined without inspecting the command line (see the parser docs).
Once the option is defined, you can read it inside any fixture with request.config.getoption and then pass it to the test explicitly or via an autouse fixture.
Alternatively, you can pass your option into almost any hook inside the config object.
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--my_global_var", default="foo")
    parser.set_defaults(my_hidden_var="bar")

@pytest.fixture()
def my_hidden_var(request):
    return request.config.getoption("my_hidden_var")

# test.py
def test_my_hidden_var(my_hidden_var):
    assert my_hidden_var == "bar"
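The command-line option defined above can be read the same way (a sketch mirroring the my_hidden_var fixture):
@pytest.fixture()
def my_global_var(request):
    return request.config.getoption("--my_global_var")

def test_my_global_var(my_global_var):
    # "foo" by default; override with: pytest --my_global_var=spam
    assert my_global_var == "foo"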

Is it possible to use py.test fixtures in doctest files?

We use py.test in a project and use fixtures for most test cases. But I see no possibility to use fixtures in doctest files.
To give an example with some code snippets: I have a browser fixture in conftest.py like:
@fixture
def browser(request):
    from wsgi_intercept import zope_testbrowser
    browser = zope_testbrowser.WSGI_Browser()
    [...]
    return browser
and use it in the file test_browser.txt like:
>>> browser.open('some_url')
>>> browser.url == 'some_url'
True
But I can't see a way to get the fixture into a doctest file. Is this possible at all with py.test?
It isn't supported at the moment. pytest would need to know at collection time which fixtures are going to be used in a doctest. If we can come up with a way to declare which fixtures are going to be used, it shouldn't be hard to add support to _pytest/doctest.py. Maybe it's also possible to automatically find out which fixtures a doctest needs, not sure.
There's a pull request attempting to implement this over at the pytest repository. It provides two globals to doctest files, i.e. fixture_request and get_fixture (which is a convenience shortcut for fixture_request.getfuncargvalue). The intended use is:
>>> browser = get_fixture('browser')
>>> browser.open('some_url')
This is different from the .. pytest-fixtures: ... line as suggested by Holger above, but was easier to implement... :) Needless to say, it's up for discussion, of course!
