What is the proper way to test a pytest fixture itself? Please don't confuse this with using a fixture in tests; I just want to test a fixture's correctness on its own.
When I try to call and execute a fixture inside a test, I get:
Fixture "app" called directly. Fixtures are not meant to be called directly
Any input would be appreciated. The docs on this topic don't give me meaningful guidance: https://docs.pytest.org/en/latest/deprecations.html#calling-fixtures-directly
The motivation for testing fixtures on their own came when our tests were failing due to a bug in a fixture and the failure wasn't tracked correctly in our TAP files; that's what prompted me to test the fixtures standalone.
pytest has a pytester plugin that was made for the purpose of testing pytest itself and plugins; it executes tests in an isolated run that doesn't affect the current test run. Example:
# conftest.py
import pytest

pytest_plugins = ['pytester']

@pytest.fixture
def spam(request):
    yield request.param
The spam fixture has the issue that it only works with parametrized tests; as soon as it is requested in an unparametrized test, it raises an AttributeError. This means we can't test it via a regular test like this:
def test_spam_no_params(spam):
    # too late to verify anything - spam already raised in test setup!
    # In fact, the body of this test won't be executed at all.
    pass
Instead, we execute the test in an isolated test run using the testdir fixture which is provided by the pytester plugin:
import pathlib
import pytest

# an example of how to load the code from the actual test suite
@pytest.fixture
def read_conftest(request):
    return pathlib.Path(request.config.rootdir, 'conftest.py').read_text()
def test_spam_fixture(testdir, read_conftest):
    # you can create a test suite by providing file contents in different ways, e.g.
    testdir.makeconftest(read_conftest)
    testdir.makepyfile(
        """
        import pytest

        @pytest.mark.parametrize('spam', ('eggs', 'bacon'), indirect=True)
        def test_spam_parametrized(spam):
            assert spam in ['eggs', 'bacon']

        def test_spam_no_params(spam):
            assert True
        """)
    result = testdir.runpytest()
    # we should have two passed tests and one error (the unparametrized one)
    result.assert_outcomes(passed=2, errors=1)
    # if we have to, we can analyze the output made by pytest
    assert "AttributeError: 'SubRequest' object has no attribute 'param'" in ' '.join(result.outlines)
Another handy way of loading test code is the testdir.copy_example method. Set up the root path in pytest.ini, for example:
[pytest]
pytester_example_dir = samples_for_fixture_tests
norecursedirs = samples_for_fixture_tests
Now create the file samples_for_fixture_tests/test_spam_fixture/test_x.py with the contents:
import pytest

@pytest.mark.parametrize('spam', ('eggs', 'bacon'), indirect=True)
def test_spam_parametrized(spam):
    assert spam in ['eggs', 'bacon']

def test_spam_no_params(spam):
    assert True
(it's the same code that was passed as a string to testdir.makepyfile before). The above test changes to:
def test_spam_fixture(testdir, read_conftest):
    testdir.makeconftest(read_conftest)
    # pytest will now copy everything from samples_for_fixture_tests/test_spam_fixture
    testdir.copy_example()
    testdir.runpytest().assert_outcomes(passed=2, errors=1)
This way, you don't have to maintain Python code as a string in tests and can also reuse existing test modules by running them with pytester. You can also configure test data roots via the pytester_example_path mark:
@pytest.mark.pytester_example_path('fizz')
def test_fizz(testdir):
    testdir.copy_example('buzz.txt')
will look for the file fizz/buzz.txt relative to the project root dir.
For more examples, definitely check out the section Testing plugins in the pytest docs; also, you may find my other answer to the question How can I test if a pytest fixture raises an exception? helpful, as it contains yet another working example on the topic. I have also found it very helpful to study the Testdir code directly, as pytest sadly doesn't provide extensive docs for it, but the code is pretty much self-documenting.
Related
I want to have a specific setup/teardown fixture for one of the test modules. Obviously, I want it to run the setup code once before all the tests in the module, and the teardown once after all tests are done.
So, I've come up with this:
import pytest

@pytest.fixture(scope="module")
def setup_and_teardown():
    print("Start")
    yield
    print("End")

def test_checking():
    print("Checking")
    assert True
This does not work that way. It will only work if I provide setup_and_teardown as an argument to the first test in the module.
Is this the way it's supposed to work? Isn't it supposed to be run automatically if I mark it as a module level fixture?
Module-scoped fixtures behave the same as fixtures of any other scope - they are only used if they are explicitly requested by a test, marked using @pytest.mark.usefixtures, or have autouse=True set:
@pytest.fixture(scope="module", autouse=True)
def setup_and_teardown():
    print("setup")
    yield
    print("teardown")
For module- and session-scoped fixtures that do the setup/teardown as in your example, this is the most commonly used option.
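For completeness, a minimal sketch of the @pytest.mark.usefixtures route from the list above, applied module-wide via the pytestmark variable and reusing the setup_and_teardown fixture from the question:
import pytest

# apply the fixture to every test in this module without autouse
pytestmark = pytest.mark.usefixtures("setup_and_teardown")

def test_checking():
    print("Checking")
    assert True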
For fixtures that yield an object (for example an expensive resource that should only be allocated once) that is accessed in the test, this does not make sense, because the fixture has to be passed to the test to be accessible. Also, it may not be needed in all tests.
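To illustrate that last point, here is a sketch of a module-scoped fixture that yields an object; allocate_resource and its close method are hypothetical placeholders:
import pytest

@pytest.fixture(scope="module")
def expensive_resource():
    resource = allocate_resource()  # hypothetical expensive setup
    yield resource
    resource.close()  # hypothetical teardown, runs once per module

def test_uses_resource(expensive_resource):
    # the fixture must be requested by name to access the yielded object
    assert expensive_resource is not None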
I have a handful of tests in my test module that need some common setup and teardown to run before and after the test. I don't need the setup and teardown to run for every function, just a handful of them. I've found I can kind of do this with fixtures:
import copy
import os
import pytest

@pytest.fixture
def reset_env():
    env = copy.deepcopy(os.environ)
    yield None
    os.environ = env

def test_that_does_some_env_manipulation(reset_env):
    # do some tests
    pass
I don't actually need to return anything from the fixture to use in the test function, though, so I really don't need the argument. I'm only using it to trigger setup and teardown.
Is there a way to specify that a test function uses a setup/teardown fixture without needing the fixture argument? Maybe a decorator to say that a test function uses a certain fixture?
Thanks to hoefling's comment above:
@pytest.mark.usefixtures('reset_env')
def test_that_does_some_env_manipulation():
    # do some tests
    pass
You could use autouse=True in your fixture; an autouse fixture is executed automatically at the beginning of its scope.
In your code:
@pytest.fixture(autouse=True)
def reset_env():
    env = copy.deepcopy(os.environ)
    yield None
    os.environ = env

def test_that_does_some_env_manipulation():
    # do some tests
    pass
But you need to be careful about where you define the fixture, as an autouse fixture is triggered for every test in its scope. If you have all such tests under one directory, you can put the fixture in that directory's conftest file, as in the sketch below. Otherwise, you can declare the fixture in the test file.
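A minimal sketch of that conftest placement (the directory path is hypothetical):
# tests/env_tests/conftest.py  (hypothetical location)
import copy
import os
import pytest

@pytest.fixture(autouse=True)
def reset_env():
    # applies automatically to every test collected under tests/env_tests/
    env = copy.deepcopy(os.environ)
    yield
    os.environ = env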
Relevant pytest help doc
We defined all our custom assertions in a separate Python file which is not a test module.
For example:
custom_asserts.py
class CustomAsserts(object):
    def silly_assert(self, foo, bar):
        assert foo == bar, 'some error message'
If we use assert directly in tests, we will get extra info about the AssertionError, which is very useful.
Output of using assert directly in tests:
>       assert 'foo' == 'bar', 'some error message'
E       AssertionError: some error message
E       assert 'foo' == 'bar'
E         - foo
E         + bar
But we found that if we call an assertion method defined in a separate module, the extra info won't show:
from custom_asserts import CustomAsserts

asserts = CustomAsserts()

def test_silly():
    asserts.silly_assert('foo', 'bar')
Output after running the test:
>       assert 'foo' == 'bar', 'some error message'
E       AssertionError: some error message
And we also found this in pytest docs: Advanced assertion introspection
pytest only rewrites test modules directly discovered by its test
collection process, so asserts in supporting modules which are not
themselves test modules will not be rewritten.
So my question is: is there a way to let pytest do the same assertion rewriting for other modules, just as it does for test modules? Or is there any hacky way to achieve that?
Update:
pytest 3.0 introduced a new method, register_assert_rewrite, that implements this exact feature. If you are using pytest 3.0 or later, try that first.
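A minimal sketch of wiring it up for the custom_asserts module from the question:
# conftest.py
import pytest

# must be called before custom_asserts is imported anywhere,
# so that pytest can rewrite its asserts at import time
pytest.register_assert_rewrite('custom_asserts')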
Original answer:
It's kind of weird to answer my own question, but I think I found the solution and want to share it.
The trick is in how pytest collects test modules. We can define python_files in pytest.ini so that pytest considers more modules to be test modules.
For example, in my case, all my custom assert modules end with 'asserts', so my pytest.ini is:
[pytest]
python_files = *asserts.py test_*.py *_test.py
Another tricky thing is in conftest.py. It seems we have to avoid importing the asserts module in conftest.py. My assumption is that pytest's assertion rewriting works by rewriting the module's .pyc file, and since conftest.py is loaded before collection, importing the asserts module there would generate the module's .pyc before collection, which may make pytest unable to rewrite the .pyc file again.
So in my conftest.py, I have to do something like:
import pytest

@pytest.fixture(autouse=True)
def setup_asserts(request):
    from custom_asserts import CustomAsserts
    request.instance.asserts = CustomAsserts()
And I will get the extra error info, just like when using the assert keyword directly in the test script.
I have multiple tests run by py.test that are located in multiple classes in multiple files.
What is the simplest way to share a large dictionary - which I do not want to duplicate - with every method of every class in every file to be used by py.test?
In short, I need to make a "global variable" for every test. Outside of py.test, I have no use for this variable, so I don't want to store it in the files being tested. I made frequent use of py.test's fixtures, but this seems overkill for this need. Maybe it's the only way?
Update: pytest-namespace hook is deprecated/removed. Do not use. See #3735 for details.
You mention the obvious and least magical option: using a fixture. You can apply it to entire modules using pytestmark = pytest.mark.usefixtures('big_dict') in your module, but then it won't be in your namespace, so explicitly requesting it might be best (see the sketch below).
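A minimal sketch of that explicit-fixture option (the fixture name and contents are made up):
# conftest.py
import pytest

@pytest.fixture(scope='session')
def big_dict():
    # built once per test session; tests share it by reference
    return {'foo': 'bar'}

# test_something.py
def test_reads_big_dict(big_dict):
    assert big_dict['foo'] == 'bar'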
Alternatively you can assign things into the pytest namespace using the hook:
# conftest.py
def pytest_namespace():
    return {'my_big_dict': {'foo': 'bar'}}
And now you have pytest.my_big_dict. The fixture is probably still nicer though.
There are tons of things I love about py.test, but one thing I absolutely HATE is how poorly it plays with code intelligence tools. I disagree that an autouse fixture to declare a variable is the "most clear" method in this case because not only does it completely baffle my linter, but also anyone else who is not familiar with how py.test works. There is a lot of magic there, imo.
So, one thing you can do that doesn't make your linter explode and doesn't require TestCase boilerplate is to create a module called globals. Inside this module, stub the names of the things you want to be global to {} or None, and import the globals module into your tests. Then in your conftest.py file, use the py.test hooks to set (or reset) your global variable(s) as appropriate. This has the advantage of giving you the stub to work with when building tests and the full data for the tests at runtime.
For example, you can use the pytest_configure() hook to set your dict right when py.test starts up. Or, if you wanted to make sure the data was pristine between each test, you could autouse a fixture to assign your global variable to your known state before each test.
# globals.py
my_data = {}  # Create a stub for your variable

# test_module.py
import globals as gbl

def test_foo():
    assert gbl.my_data['foo'] == 'bar'  # The global is in the namespace when creating tests

# conftest.py
import pytest

import globals as gbl

my_data = {'foo': 'bar'}  # Create the master copy in conftest

@pytest.fixture(autouse=True)
def populate_globals():
    gbl.my_data = my_data  # Assign the master value to the global before each test
One other advantage to this approach is you can use type hinting in your globals module to give you code completion on the global objects in your test, which probably isn't necessary for a dict but I find it handy when I am using an object (such as webdriver). :)
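If you'd rather use the pytest_configure() hook mentioned above instead of the per-test autouse fixture, a minimal sketch:
# conftest.py
import globals as gbl

def pytest_configure(config):
    # runs once at pytest startup, before any tests are collected
    gbl.my_data = {'foo': 'bar'}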
I'm surprised no answer has mentioned caching yet: since version 2.8, pytest has a powerful cache mechanism.
Usage example
import pytest

@pytest.fixture(autouse=True)
def init_cache(request):
    data = {'spam': 'eggs'}
    request.config.cache.set('my_data', data)
Access the data dict in tests via the builtin request fixture:
def test_spam(request):
    data = request.config.cache.get('my_data', None)
    assert data['spam'] == 'eggs'
Sharing the data between test runs
The cool thing about request.config.cache is that it is persisted on disk, so it can even be shared between test runs. This comes in handy when you run tests distributed (pytest-xdist) or have some long-running data generation which does not change once generated:
@pytest.fixture(autouse=True)
def generate_data(request):
    data = request.config.cache.get('my_data', None)
    if data is None:
        data = long_running_generation_function()
        request.config.cache.set('my_data', data)
Now the tests won't need to recalculate the value on different test runs unless you clear the cache on disk explicitly. Take a look at what's currently in the cache:
$ pytest --cache-show
...
my_data contains:
{'spam': 'eggs'}
Rerun the tests with the --cache-clear flag to delete the cache and force the data to be recalculated. Or just remove the .pytest_cache directory in the project root dir.
Where to go from here
The related section in pytest docs: Cache: working with cross-testrun state.
Having a big dictionary of globals that every test uses is probably a bad idea. If possible, I suggest refactoring your tests to avoid this sort of thing.
That said, here is how I would do it: define an autouse fixture that adds a reference to the dictionary in the global namespace of every function.
Here is some code. It's all in the same file, but you can move the fixture out to conftest.py at the top level of your tests. (Note that this example is Python 2; in Python 3, the func_globals attribute is named __globals__.)
import pytest

my_big_global = {'key': 'value'}

@pytest.fixture(autouse=True)
def myglobal(request):
    request.function.func_globals['foo'] = my_big_global

def test_foo():
    assert foo['key'] == 'value'

def test_bar():
    assert foo['key'] == 'bar'
Here is the output from when I run this code:
$ py.test test_global.py -vv
======================================= test session starts =======================================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
collected 2 items
test_global.py:9: test_foo PASSED
test_global.py:12: test_bar FAILED
============================================ FAILURES =============================================
____________________________________________ test_bar _____________________________________________
    def test_bar():
>       assert foo['key'] == 'bar'
E       assert 'value' == 'bar'
E         - value
E         + bar

test_global.py:13: AssertionError
=============================== 1 failed, 1 passed in 0.01 seconds ===============================
Note that you can't use a session-scoped fixture because then you don't have access to each function object. Because of this, I'm making sure to define my big global dictionary once and use references to it -- if I defined the dictionary in that assignment statement, a new copy would be made each time.
In closing, doing anything like this is probably a bad idea. Good luck though :)
You can add your global variable as an option inside the pytest_addoption hook.
It is possible to do this explicitly with addoption, or to use the set_defaults method if you want your attribute to be determined without any inspection of the command line (see the docs).
Once the option is defined, you can read it inside any fixture with request.config.getoption and then pass it to the test explicitly or via autouse.
Alternatively, you can access your option in almost any hook through the config object.
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--my_global_var", default="foo")
    parser.set_defaults(my_hidden_var="bar")

@pytest.fixture()
def my_hidden_var(request):
    return request.config.getoption("my_hidden_var")

# test.py
def test_my_hidden_var(my_hidden_var):
    assert my_hidden_var == "bar"