I have a fixture that queries a MongoDB database and returns a record.
@pytest.fixture
def chicago():
    return GetRecord("CHICAGO,IL")
Each time I pass this fixture into a test, it runs the query again. Is there a best practice on how to make it so I only run this DB call once? I found I can do something like this, but it seems to defeat the purpose of a fixture.
chi = GetRecord("CHICAGO,IL")

@pytest.fixture
def chicago():
    return chi
I could just pass in chi instead of declaring a fixture if this is the alternative. Any other solutions?
To have a fixture evaluated only once within a certain scope, you can set the fixture scope. As described in the pytest documentation, there are five scopes that affect the context in which a fixture lives. Basically, a fixture is set up at the beginning of its scope and cleaned up at the end of it.
If you need the fixture to be initialized only once over all tests in a session, like in your case, you can use session scope:
@pytest.fixture(scope="session")
def chicago():
    return GetRecord("CHICAGO,IL")
This will be evaluated once, when the first test uses it, and would be cleaned up (if there were any cleanup code) after all tests have finished.
Here is a simple example to show this:
import pytest

@pytest.fixture(scope="session")
def the_answer():
    print("Calculating...")
    yield 6 * 7
    print("Answered the question!")

def test_session1(the_answer):
    print(the_answer)

def test_session2(the_answer):
    print(the_answer)

def test_session3(the_answer):
    print(the_answer)
This results in:
$ python -m pytest -s session_based.py
==================== test session starts ====================
...
collected 3 items
session_based.py Calculating...
42
.42
.42
.Answered the question!
==================== 3 passed in 0.32s ====================
As you can see, the "calculation" is done only once before the first test, and the "cleanup" is done after all tests have finished.
Correspondingly, the rarely used package-scoped fixtures are set up once per test package, module-scoped fixtures are set up once per test module and cleaned up after the tests in that module finish, class-scoped fixtures are set up once per test class, and function-scoped fixtures (the default) live only for the single test that uses them.
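As a rough illustration of one of the other scopes (not from the answer above, just a sketch), a module-scoped fixture is set up once and shared by every test in the same file:

import pytest

@pytest.fixture(scope="module")
def shared_list():
    print("module setup")    # runs once per test module
    data = []
    yield data
    print("module teardown")  # runs after the last test in the module

def test_append(shared_list):
    shared_list.append(1)
    assert shared_list == [1]

def test_same_object(shared_list):
    # the very same list instance is reused within this module
    shared_list.append(2)
    assert shared_list == [1, 2]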
I'm quite sure that has been answered before, but as I could not find a matching answer, here you are.
Related
I want to create a suite setup for my Pytest framework.
I have created one fixture named suite_setup, which returns some values for the test cases to consume.
@pytest.fixture(scope="session")
def suite_setup(arg1, arg2 ...)
Then I have 3 files, test1.py, test2.py and test3.py, each with multiple test cases. I want my suite_setup to be called once, then all the test cases in the .py files should execute, and at the end suite_teardown should execute.
I want the order to be like this:
suite_setup
testcase1
testcase2
testcase..n
suite_teardown
Can you please help me with the syntax?
If I run with -n 2, my session-level fixture should get called only 2 times, on different environments.
I have created a conftest.py with the fixture, but I am not sure how to call the session-level fixture only once for all test cases, and I want to use the values returned by the session-level fixture in all test cases.
I tried requesting the session-scoped fixture as a test parameter, but it gets called every time a new test case starts.
A simple way is to define the fixture as a generator by using a yield instead of a standard return. This easily allows you to specify setup and teardown logic.
You may also wish to use autouse=True to force it to always be run even if the fixture is not explicitly specified as a test argument.
In conftest.py you can add something along the lines of this:
#pytest.fixture(scope="session", autouse=True)
def session_fixture():
fixture = do_setup()
yield fixture
do_teardown(fixture)
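If a test also needs the value yielded by the fixture, it can still request the fixture by name even though autouse already runs it; here is a minimal sketch (do_setup/do_teardown are placeholders as above):

# test_example.py (hypothetical)
def test_uses_session_value(session_fixture):
    # requesting the fixture by name gives access to the object yielded in conftest.py
    assert session_fixture is not None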
I have unit tests that require test data. This test data is downloaded and has a decently large file size.
@pytest.fixture
def large_data_file():
    large_data = download_some_data('some_token')
    return large_data

# Perform tests with input data
def test_foo(large_data_file): pass
def test_bar(large_data_file): pass
def test_baz(large_data_file): pass
# ... and so on
I do not want to download this data more than once. It should only be downloaded once, and passed to all the tests that require it. Does pytest call on large_data_file once and use it for every unit test that uses that fixture, or does it call the large_data_file each time?
In unittest, you would simply download the data for all the tests once in the setUpClass method.
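For comparison, a minimal unittest sketch of that idea might look like this (download_some_data is the same placeholder as above):

import unittest

class TestWithLargeData(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once for the whole class, so the download only happens a single time
        cls.large_data = download_some_data('some_token')

    def test_foo(self):
        self.assertIsNotNone(self.large_data)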
I would rather not just have a global large_data_file = download_some_data('some_token') in this py script. I would like to know how to handle this use-case with Pytest.
Does pytest call on large_data_file once and use it for every unit test that uses that fixture, or does it call the large_data_file each time?
It depends on the fixture scope. The default scope is function, so in your example large_data_file will be evaluated three times. If you broaden the scope, e.g.
@pytest.fixture(scope="session")
def large_data_file():
    ...
the fixture will be evaluated once per test session and the result will be cached and reused in all dependent tests. Check out the section Scope: sharing fixtures across classes, modules, packages or session in pytest docs for more details.
I want to have a specific setup/tear down fixture for one of the test modules. Obviously, I want it to run the setup code once before all the tests in the module, and once after all tests are done.
So, I've come up with this:
import pytest

@pytest.fixture(scope="module")
def setup_and_teardown():
    print("Start")
    yield
    print("End")

def test_checking():
    print("Checking")
    assert True
This does not work that way. It will only work if I provide setup_and_teardown as an argument to the first test in the module.
Is this the way it's supposed to work? Isn't it supposed to be run automatically if I mark it as a module level fixture?
Module-scoped fixtures behave the same as fixtures of any other scope - they are only used if they are explicitly passed to a test, applied using @pytest.mark.usefixtures, or have autouse=True set:
#pytest.fixture(scope="module", autouse=True)
def setup_and_teardown():
print("setup")
yield
print("teardown")
For module- and session-scoped fixtures that do the setup/teardown as in your example, this is the most commonly used option.
For fixtures that yield an object (for example an expensive resource that should only be allocated once) that is accessed in the test, this does not make sense, because the fixture has to be passed to the test to be accessible. Also, it may not be needed in all tests.
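For example, a module-scoped fixture that yields such an object would typically be requested explicitly rather than autoused (a sketch, not from the answer above):

import pytest

@pytest.fixture(scope="module")
def expensive_resource():
    resource = {"connection": "open"}  # stands in for an expensive allocation
    yield resource
    resource.clear()  # teardown after the last test in the module

def test_uses_resource(expensive_resource):
    # the fixture must be listed as a parameter so the test can access the object
    assert expensive_resource["connection"] == "open"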
I have a handful of tests in my test module that need some common setup and teardown to run before and after the test. I don't need the setup and teardown to run for every function, just a handful of them. I've found I can kind of do this with fixtures
@pytest.fixture
def reset_env():
    env = copy.deepcopy(os.environ)
    yield None
    os.environ = env

def test_that_does_some_env_manipulation(reset_env):
    # do some tests
I don't actually need to return anything from the fixture to use in the test function, though, so I really don't need the argument. I'm only using it to trigger setup and teardown.
Is there a way to specify that a test function uses a setup/teardown fixture without needing the fixture argument? Maybe a decorator to say that a test function uses a certain fixture?
Thanks to hoefling's comment above
@pytest.mark.usefixtures('reset_env')
def test_that_does_some_env_manipulation():
    # do some tests
You could use autouse=True in your fixture. autouse automatically executes the fixture at the beginning of the fixture's scope.
In your code:
@pytest.fixture(autouse=True)
def reset_env():
    env = copy.deepcopy(os.environ)
    yield None
    os.environ = env

def test_that_does_some_env_manipulation():
    # do some tests
But you need to be careful about the scope of the fixture, as an autouse fixture is triggered for every test within its scope. If you have all such tests under one directory, you can put it in the conftest.py of that directory; otherwise, you can declare the fixture in the test file.
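For example, a sketch of putting the autouse fixture into the directory's conftest.py so it applies to every test below that directory (the path is just illustrative):

# tests/env_tests/conftest.py (hypothetical path)
import copy
import os

import pytest

@pytest.fixture(autouse=True)
def reset_env():
    # applies automatically to every test collected below tests/env_tests/
    env = copy.deepcopy(os.environ)
    yield
    os.environ = env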
Relevant pytest help doc
I have multiple tests run by py.test that are located in multiple classes in multiple files.
What is the simplest way to share a large dictionary - which I do not want to duplicate - with every method of every class in every file to be used by py.test?
In short, I need to make a "global variable" for every test. Outside of py.test, I have no use for this variable, so I don't want to store it in the files being tested. I made frequent use of py.test's fixtures, but this seems overkill for this need. Maybe it's the only way?
Update: the pytest_namespace hook is deprecated/removed. Do not use. See #3735 for details.
You mention the obvious and least magical option: using a fixture. You can apply it to entire modules using pytestmark = pytest.mark.usefixtures('big_dict') in your module, but then it won't be in your namespace so explicitly requesting it might be best.
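For example, the module-level marker might look like this (a sketch; big_dict stands for whatever fixture you define in conftest.py):

# test_module.py
import pytest

# applies the fixture to every test in this module
pytestmark = pytest.mark.usefixtures('big_dict')

def test_something():
    # big_dict has been set up, but it is not a local name here,
    # so tests that need the value should request the fixture explicitly instead
    pass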
Alternatively you can assign things into the pytest namespace using the hook:
# conftest.py
def pytest_namespace():
    return {'my_big_dict': {'foo': 'bar'}}
And now you have pytest.my_big_dict. The fixture is probably still nicer though.
There are tons of things I love about py.test, but one thing I absolutely HATE is how poorly it plays with code intelligence tools. I disagree that an autouse fixture to declare a variable is the "most clear" method in this case because not only does it completely baffle my linter, but also anyone else who is not familiar with how py.test works. There is a lot of magic there, imo.
So, one thing you can do that doesn't make your linter explode and doesn't require TestCase boilerplate is to create a module called globals. Inside this module, stub the names of the things you want global to {} or None and import the global module into your tests. Then in your conftest.py file, use the py.test hooks to set (or reset) your global variable(s) as appropriate. This has the advantage of giving you the stub to work with when building tests and the full data for the tests at runtime.
For example, you can use the pytest_configure() hook to set your dict right when py.test starts up. Or, if you wanted to make sure the data was pristine between each test, you could autouse a fixture to assign your global variable to your known state before each test.
# globals.py
my_data = {}  # Create a stub for your variable

# test_module.py
import globals as gbl

def test_foo():
    assert gbl.my_data['foo'] == 'bar'  # The global is in the namespace when creating tests

# conftest.py
import pytest

import globals as gbl

my_data = {'foo': 'bar'}  # Create the master copy in conftest

@pytest.fixture(autouse=True)
def populate_globals():
    gbl.my_data = my_data  # Assign the master value to the global before each test
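A sketch of the pytest_configure() variant mentioned above, if you prefer setting the global once at startup instead of before each test (same hypothetical globals module):

# conftest.py (pytest_configure variant)
import globals as gbl

def pytest_configure(config):
    # runs once when pytest starts, before any tests are collected
    gbl.my_data = {'foo': 'bar'}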
One other advantage to this approach is you can use type hinting in your globals module to give you code completion on the global objects in your test, which probably isn't necessary for a dict but I find it handy when I am using an object (such as webdriver). :)
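For instance, the stub in the globals module could carry a type hint (a small sketch; the annotation is what gives your IDE something to complete against):

# globals.py
from typing import Dict

# type-hinted stub; conftest.py assigns the real value at runtime
my_data: Dict[str, str] = {}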
I'm surprised no answer has mentioned caching yet: since version 2.8, pytest has a powerful cache mechanism.
Usage example
@pytest.fixture(autouse=True)
def init_cache(request):
    # store the data in pytest's cache so tests (and later runs) can read it back
    data = {'spam': 'eggs'}
    request.config.cache.set('my_data', data)
Access the data dict in tests via the builtin request fixture:
def test_spam(request):
    data = request.config.cache.get('my_data', None)
    assert data['spam'] == 'eggs'
Sharing the data between test runs
The cool thing about request.config.cache is that it is persisted on disk, so it can even be shared between test runs. This comes in handy when you are running tests distributed (pytest-xdist) or have some long-running data generation which does not change once generated:
@pytest.fixture(autouse=True)
def generate_data(request):
    data = request.config.cache.get('my_data', None)
    if data is None:
        data = long_running_generation_function()
        request.config.cache.set('my_data', data)
Now the tests won't need to recalculate the value on different test runs unless you clear the cache on disk explicitly. Take a look at what's currently in the cache:
$ pytest --cache-show
...
my_data contains:
{'spam': 'eggs'}
Rerun the tests with the --cache-clear flag to delete the cache and force the data to be recalculated. Or just remove the .pytest_cache directory in the project root dir.
Where to go from here
The related section in pytest docs: Cache: working with cross-testrun state.
Having a big dictionary of globals that every test uses is probably a bad idea. If possible, I suggest refactoring your tests to avoid this sort of thing.
That said, here is how I would do it: define an autouse fixture that adds a reference to the dictionary in the global namespace of every function.
Here is some code. It's all in the same file, but you can move the fixture out to conftest.py at the top level of your tests.
import pytest

my_big_global = {'key': 'value'}

@pytest.fixture(autouse=True)
def myglobal(request):
    request.function.func_globals['foo'] = my_big_global

def test_foo():
    assert foo['key'] == 'value'

def test_bar():
    assert foo['key'] == 'bar'
Here is the output from when I run this code:
$ py.test test_global.py -vv
======================================= test session starts =======================================
platform darwin -- Python 2.7.5 -- py-1.4.20 -- pytest-2.5.2 -- env/bin/python
collected 2 items
test_global.py:9: test_foo PASSED
test_global.py:12: test_bar FAILED
============================================ FAILURES =============================================
____________________________________________ test_bar _____________________________________________
def test_bar():
> assert foo['key'] == 'bar'
E assert 'value' == 'bar'
E - value
E + bar
test_global.py:13: AssertionError
=============================== 1 failed, 1 passed in 0.01 seconds ===============================
Note that you can't use a session-scoped fixture because then you don't have access to each function object. Because of this, I'm making sure to define my big global dictionary once and use references to it -- if I defined the dictionary in that assignment statement, a new copy would be made each time.
In closing, doing anything like this is probably a bad idea. Good luck though :)
You can add your global variable as an option inside the pytest_addoption hook.
It is possible to do it explicitly with addoption, or to use the set_defaults method if you want your attribute to be determined without any inspection of the command line (see the docs).
Once the option is defined, you can read it inside any fixture with request.config.getoption and then pass it to the test explicitly or with autouse.
Alternatively, you can access your option in almost any hook through the config object.
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--my_global_var", default="foo")
    parser.set_defaults(my_hidden_var="bar")

@pytest.fixture()
def my_hidden_var(request):
    return request.config.getoption("my_hidden_var")

# test.py
def test_my_hidden_var(my_hidden_var):
    assert my_hidden_var == "bar"
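For completeness, reading the first option (my_global_var) works the same way; a small sketch, with the default overridable on the command line via --my_global_var=...:

# test.py
def test_my_global_var(request):
    # "foo" by default, or whatever was passed via --my_global_var
    assert request.config.getoption("my_global_var") == "foo"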