I use pytest quite a bit for my code. A sample code structure looks like this. The entire codebase is Python 2.7.
core/__init__.py
core/utils.py
#feature
core/feature/__init__.py
core/feature/service.py
#tests
core/feature/tests/__init__.py
core/feature/tests/test1.py
core/feature/tests/test2.py
core/feature/tests/test3.py
core/feature/tests/test4.py
core/feature/tests/test10.py
The service.py looks something like this:
from modules import stuff
from core.utils import Utility

class FeatureManager:
    # lots of other methods

    def execute(self, *args, **kwargs):
        self._execute_step1(*args, **kwargs)
        # some more code
        self._execute_step2(*args, **kwargs)
        utility = Utility()
        utility.doThings(args[0], kwargs['variable'])
All the tests in feature/tests/* end up calling core.feature.service.FeatureManager.execute. However, utility.doThings() does not need to run during tests: it should happen when the production application runs, but not while the tests are being run.
I can do something like this in my core/feature/tests/test1.py
from mock import patch

class Test1:
    def test_1(self):
        with patch('core.feature.service.Utility') as MockedUtils:
            execute_test_case_1()
This would work. However, I only just added Utility to the code base, and I have more than 300 test cases; I would not want to go into each test case and add this with statement.
I could write a conftest.py that sets an OS-level environment variable, based on which core.feature.service.FeatureManager.execute could decide not to call utility.doThings, but I do not know if that is a clean solution to this issue.
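For illustration, that approach would look roughly like this (UNDER_TEST is an invented name), though it means the production code grows a test-only branch:

# content of conftest.py -- sketch only
import os

def pytest_configure(config):
    os.environ['UNDER_TEST'] = '1'

# and inside FeatureManager.execute, the call would become:
#     if not os.environ.get('UNDER_TEST'):
#         utility.doThings(args[0], kwargs['variable'])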
I would appreciate it if someone could help me with global patches for the entire session. I would like to do what I did with the with block above, but globally for the entire session. Any articles on this matter would be great too.
TLDR: How do I create session-wide patches while running pytest?
I added a file called core/feature/conftest.py that looks like this:
import logging

import mock
import pytest

log = logging.getLogger(__name__)

@pytest.fixture(scope="session", autouse=True)
def default_session_fixture(request):
    """
    :type request: _pytest.python.SubRequest
    :return:
    """
    log.info("Patching core.feature.service")
    patched = mock.patch('core.feature.service.Utility')
    patched.__enter__()

    def unpatch():
        patched.__exit__()
        log.info("Unpatching complete")

    request.addfinalizer(unpatch)
This is nothing complicated. It is like doing
with mock.patch('core.feature.service.Utility') as patched:
    do_things()
but only in a session-wide manner.
Building on the currently accepted answer for a similar use case (4.5 years later), using unittest.mock's patch with a yield also worked:
import logging
from typing import Iterator
from unittest.mock import patch

import pytest

log = logging.getLogger(__name__)

@pytest.fixture(scope="session", autouse=True)
def default_session_fixture() -> Iterator[None]:
    log.info("Patching core.feature.service")
    with patch("core.feature.service.Utility"):
        yield
    log.info("Unpatching complete")
Aside
For this answer I utilized autouse=True. In my actual use case, to integrate into my unit tests on a test-by-test basis, I utilized @pytest.mark.usefixtures("default_session_fixture").
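For example, a test opting in to the (non-autouse) fixture by name might look like this (the test name is hypothetical):

import pytest

# the fixture is applied by name without appearing as a parameter
@pytest.mark.usefixtures("default_session_fixture")
def test_execute_with_patched_utility():
    ...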
Versions
Python==3.8.6
pytest==6.2.2
Related
I am using pytest to test a project I am working on. I have a project structure like
|my_project
||__init__.py
||my_code.py
||test.py
and test.py looks like
# contents of test.py
import my_code

def test_function():
    ...
    ...
To run the tests, I can just run python -m pytest from this directory. So far so good.
But for the code to run remotely, I have to use Pants to build a virtual environment so that the import statement will actually look like:
import long.path.to.my_project.my_code as my_code
I want to make sure that the code still works in this virtual environment as well, so right now I have a different file called test_venv.py with identical tests, where the only difference is the import.
# contents of test_venv.py
import long.path.to.my_project.my_code as my_code

def test_function():
    ...
    ...
This does work, but it's very annoying to have two almost identical test files. Is there a way to make the import statement parameterized, so that I can tell pytest where I want to import from when I run the test?
You can combine the imports: try one way, catch the ImportError, and fall back to the other.
try:
    import my_code
except ImportError:
    import long.path.to.my_project.my_code as my_code

def test_function():
    ...
    ...
I'm not sure, but maybe it should be the other way around:
try:
    import long.path.to.my_project.my_code as my_code
except ImportError:
    import my_code
After playing around with @morhc's suggestion to use this, I figured out a way. It involves using parameterized fixtures and importlib. I set up the fixture as follows.
import importlib

import pytest

@pytest.fixture(scope='module', params=['local', 'installed'])
def my_code_module(request):
    if request.param == 'local':
        return importlib.import_module("my_code")
    if request.param == 'installed':
        return importlib.import_module("long.path.to.my_project.my_code")
Then write the tests to request the fixture as follows.
def test_code(my_code_module):
    assert my_code_module.whatever() == ...
In pytest, my testing script compares the calculated results with baseline results which are loaded via
import os

import pandas

SCRIPTLOC = os.path.dirname(__file__)
TESTBASELINE = os.path.join(SCRIPTLOC, 'baseline', 'baseline.csv')
baseline = pandas.DataFrame.from_csv(TESTBASELINE)
Is there a non-boilerplate way to tell pytest to start looking from the root directory of the script rather than get the absolute location through SCRIPTLOC?
If you are simply looking for the pytest equivalent of using __file__, you can add the request fixture to your test and use request.fspath.
From the docs:
class FixtureRequest
    ...
    fspath
        the file system path of the test module which collected this test.
So an example might look like:
import os

def test_script_loc(request):
    baseline = os.path.join(request.fspath.dirname, 'baseline', 'baseline.csv')
    print(baseline)
If you're wanting to avoid boilerplate, though, you're not going to gain much from doing this (assuming I understand what you mean by 'non-boilerplate').
Personally, I think using the fixture is more explicit (within pytest idioms), but I prefer to wrap the request manipulation in another fixture so I know that I'm specifically grabbing sample test data just by looking at the method signature of a test.
Here's a snippet I use (modified to match your question, I use a subdirectory hierarchy):
# in conftest.py
import pytest

@pytest.fixture(scope="module")
def script_loc(request):
    '''Return the directory of the currently running test script'''
    # uses .join instead of .dirname so we get a LocalPath object instead of
    # a string. LocalPath.join calls normpath for us when joining the path
    return request.fspath.join('..')
And sample usage
def test_script_loc(script_loc):
    baseline = script_loc.join('baseline/baseline.csv')
    print(baseline)
Update: pathlib
Given that I've long since stopped supporting legacy Python and instead use pathlib, I no longer return LocalPath objects nor depend on their API.
The pytest team is also planning to eventually deprecate py.path and port their internals to the standard library pathlib. This started as early as pytest 3.9.0 with the introduction of tmp_path, though the actual removal of LocalPath attributes may not happen for some time.
Though the pytest team will likely add an alternative attribute (e.g. request.fs_path) that returns a Path object, it's easy enough to convert the LocalPath to a Path ourselves for now.
Here's a variant of the above example using a Path object, tweak as it suits you:
# in conftest.py
import pytest
from pathlib import Path

@pytest.fixture(scope="module")
def script_loc(request):
    '''Return the directory of the currently running test script'''
    return Path(request.fspath).parent
And sample usage
def test_script_loc(script_loc):
    baseline = script_loc.joinpath('baseline/baseline.csv')
    print(baseline)
I am writing functional tests using pytest for a software that can run locally and in the cloud. I want to create 2 modules, each with the same module/fixture names, and have pytest load one or the other depending if I'm running tests locally or in the cloud:
/fixtures
/fixtures/__init__.py
/fixtures/local_hybrids
/fixtures/local_hybrids/__init__.py
/fixtures/local_hybrids/foo.py
/fixtures/cloud_hybrids
/fixtures/cloud_hybrids/__init__.py
/fixtures/cloud_hybrids/foo.py
/test_hybrids/test_hybrids.py
foo.py (both of them):
import pytest

@pytest.fixture()
def my_fixture():
    return True
/fixtures/__init__.py:
if True:
    import local_hybrids as hybrids
else:
    import cloud_hybrids as hybrids
/test_hybrids/test_hybrids.py:
from fixtures.hybrids.foo import my_fixture

def test_hybrid(my_fixture):
    assert my_fixture
The last code block doesn't work, of course, because import fixtures.hybrids looks at the file system instead of the "fake" namespace set up in __init__.py. This is unlike from fixtures import hybrids, which does work, but then you cannot use the fixtures, since their names would involve dot notation.
I realize that I could play with pytest_generate_tests to alter the fixtures dynamically (maybe?), but I'd really hate managing each fixture manually from within that function... I was hoping the dynamic import (if x, import this, else import that) was standard Python; unfortunately it clashes with the fixtures mechanism:
import fixtures

def test(fixtures.hybrids.my_fixture):  # of course it doesn't work :)
    ...
I could also import each fixture function one after the other in init; more legwork, but still a viable option to fool pytest and get fixture names without dots.
Show me the black magic. :) Can it be done?
I think in your case it's better to define a fixture, named environment or some other nice name.
This fixture can be just a getter from os.environ['KEY'], or you can add a custom command line argument like here,
then use it like here,
and the final use is here.
What I'm trying to say is that you need to switch your thinking to dependency injection: everything should be a fixture. In your case (and in my plugin as well), the runtime environment should be a fixture, which is checked in all the other fixtures that depend on the environment.
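A minimal sketch of that idea (the --env option name and the fixture bodies are my own, not taken from the links above):

# conftest.py -- sketch only
import pytest

def pytest_addoption(parser):
    parser.addoption("--env", action="store", default="local",
                     help="runtime environment: 'local' or 'cloud'")

@pytest.fixture(scope="session")
def environment(request):
    """Expose the chosen runtime environment to other fixtures."""
    return request.config.getoption("--env")

@pytest.fixture()
def my_fixture(environment):
    # every environment-dependent fixture branches on the environment
    # instead of importing one of two modules
    if environment == "cloud":
        return {"backend": "cloud"}
    return {"backend": "local"}

Running pytest --env=cloud then switches every dependent fixture at once.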
You might be missing something here: if you want to re-use those fixtures, you need to say so explicitly:
from fixtures.hybrids.foo import my_fixture

@pytest.mark.usefixtures('my_fixture')
def test_hybrid(my_fixture):
    assert my_fixture
In that case you could tweak pytest as follows:
from local_hybrids import local_hybrids_fixture
from cloud_hybrids import cloud_hybrids_fixture

fixtures_to_test = {
    "local": None,
    "cloud": None
}

@pytest.mark.usefixtures("local_hybrids_fixture")
def test_add_local_fixture(local_hybrids_fixture):
    fixtures_to_test["local"] = local_hybrids_fixture

@pytest.mark.usefixtures("cloud_hybrids_fixture")
def test_add_cloud_fixture(cloud_hybrids_fixture):
    fixtures_to_test["cloud"] = cloud_hybrids_fixture

def test_on_fixtures():
    if cloud_enabled:
        fixture = fixtures_to_test["cloud"]
    else:
        fixture = fixtures_to_test["local"]
    ...
If there are better solutions around I am also interested ;)
I don't really think there is a "good way" of doing this in Python, but it is still possible with a bit of hacking. You can update sys.path with the subfolder containing the fixtures you would like to use and import the fixtures directly. A quick-and-dirty version looks like this,
for your fixtures/__init__.py:
if True:
    import local as hybrids
else:
    import cloud as hybrids

def update_path(module):
    from sys import path
    from os.path import join, pardir, abspath
    mod_dir = abspath(join(module.__file__, pardir))
    path.insert(0, mod_dir)

update_path(hybrids)
and in the client code (test_hybrids/test_hybrids.py):
import fixtures
from foo import spam
spam()
In other cases you can use much more complex actions to perform a fake move of all the modules/packages/functions etc. from your cloud/local folder directly into the fixture's __init__.py. Still, I don't think it is worth the effort.
One more thing: black magic is not the best thing to use. I would recommend you use dotted notation with from Y import X; that is a much more stable solution.
Use the pytest plugins feature and put your fixtures in separate modules. Then at runtime select which plug-in you’ll be drawing from via a command line argument or an environment variable. It needs to be something global because you need to place different pytest_plugins list assignments based on the global value.
Take a look at the Conditional Plugins section of this repo: https://github.com/jxramos/pytest_behavior/tree/main/conditional_plugins
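A minimal sketch of the idea, assuming the question's directory layout and an invented TEST_ENV variable:

# content of the top-level conftest.py -- sketch only
import os

# pytest imports the named modules as plugins at startup,
# so their fixtures become available to all tests
if os.environ.get("TEST_ENV", "local") == "cloud":
    pytest_plugins = ["fixtures.cloud_hybrids.foo"]
else:
    pytest_plugins = ["fixtures.local_hybrids.foo"]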
I'd like to connect to a different database if my code is running under py.test. Is there a function to call or an environment variable that I can test that will tell me if I'm running under a py.test session? What's the best way to handle this?
A simpler solution I came to:
import sys

if "pytest" in sys.modules:
    ...
Pytest runner will always load the pytest module, making it available in sys.modules.
Of course, this solution only works if the code you're trying to test does not use pytest itself.
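Applied to the question's database case, a sketch with invented connection strings:

import sys

def get_database_url():
    # hypothetical example: point at a throwaway database under pytest
    if "pytest" in sys.modules:
        return "sqlite:///:memory:"
    return "postgresql://db-server/production"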
There's also another way documented in the manual:
https://docs.pytest.org/en/latest/example/simple.html#pytest-current-test-environment-variable
Pytest sets the environment variable PYTEST_CURRENT_TEST.
Checking the existence of said variable should reliably allow one to detect if code is being executed from within the umbrella of pytest.
import os

if "PYTEST_CURRENT_TEST" in os.environ:
    # We are running under pytest, act accordingly...
    ...
Note
This method works only when an actual test is being run.
This detection will not work when modules are imported during pytest collection.
A solution came from RTFM, although not in an obvious place. The manual also had an error in code, corrected below.
Detect if running from within a pytest run
Usually it is a bad idea to make application code behave differently
if called from a test. But if you absolutely must find out if your
application code is running from a test you can do something like
this:
# content of conftest.py

def pytest_configure(config):
    import sys
    sys._called_from_test = True

def pytest_unconfigure(config):
    import sys  # This was missing from the manual
    del sys._called_from_test
and then check for the sys._called_from_test flag:
import sys

if hasattr(sys, '_called_from_test'):
    # called from within a test run
    ...
else:
    # called "normally"
    ...
accordingly in your application. It's also a good idea to use your own
application module rather than sys for handling the flag.
Working with pytest==4.3.1, the methods above failed, so I just went old school and checked with:
import os
import sys

script_name = os.path.basename(sys.argv[0])
if script_name in ['pytest', 'py.test']:
    print('Running with pytest!')
While the hack explained in the other answer (http://pytest.org/latest/example/simple.html#detect-if-running-from-within-a-pytest-run) does indeed work, you could probably design the code in such a way that you would not need to do this.
If you design the code to take the database to connect to as an argument somehow, via a connection or something else, then you can simply inject a different argument when you're running the tests than when the application drives it. Your code will end up with less global state and be more modular and reusable. So to me it sounds like an example where testing drives you to design the code better.
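For instance, a minimal runnable sketch (the table and function names are invented) where the caller injects the connection:

import sqlite3

def count_things(connection):
    # application code never asks "am I under pytest?";
    # the caller decides which database to hand in
    return connection.execute("SELECT COUNT(*) FROM things").fetchone()[0]

# a test simply injects an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE things (id INTEGER)")
assert count_things(conn) == 0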
This could be done by setting an environment variable inside the testing code. For example, given a project
conftest.py
mypkg/
__init__.py
app.py
tests/
test_app.py
In test_app.py you can add
import os
os.environ['PYTEST_RUNNING'] = 'true'
And then you can check inside app.py:
import os

if os.environ.get('PYTEST_RUNNING', '') == 'true':
    print('pytest is running')
Summary: when a certain python module is imported, I want to be able to intercept this action, and instead of loading the required class, I want to load another class of my choice.
Reason: I am working on some legacy code. I need to write some unit test code before I start some enhancement/refactoring. The code imports a certain module which will fail in a unit test setting, however. (Because of database server dependency)
Pseudo code:
from LegacyDataLoader import load_me_data

...

def do_something():
    data = load_me_data()
So, ideally, when Python executes the import line above in a unit test, an alternative class, say MockDataLoader, is loaded instead.
I am still using 2.4.3. I suppose there is an import hook I can manipulate.
Edit
Thanks a lot for the answers so far. They are all very helpful.
One particular type of suggestion is about manipulation of PYTHONPATH. It does not work in my case. So I will elaborate my particular situation here.
The original codebase is organised in this way
./dir1/myapp/database/LegacyDataLoader.py
./dir1/myapp/database/Other.py
./dir1/myapp/database/__init__.py
./dir1/myapp/__init__.py
My goal is to enhance the Other class in the Other module. But since it is legacy code, I do not feel comfortable working on it without strapping a test suite around it first.
Now I introduce this unit test code
./unit_test/test.py
The content is simply:
from myapp.database.Other import Other

def test1():
    o = Other()
    o.do_something()

if __name__ == "__main__":
    test1()
When the CI server runs the above test, the test fails. This is because class Other uses LegacyDataLoader, and LegacyDataLoader cannot establish a database connection to the db server from the CI box.
Now let's add a fake class as suggested:
./unit_test_fake/myapp/database/LegacyDataLoader.py
./unit_test_fake/myapp/database/__init__.py
./unit_test_fake/myapp/__init__.py
Modify the PYTHONPATH to
export PYTHONPATH=unit_test_fake:dir1:unit_test
Now the test fails for another reason:
File "unit_test/test.py", line 1, in <module>
    from myapp.database.Other import Other
ImportError: No module named Other
It has something to do with the way Python resolves classes/attributes in a module.
You can intercept import and from ... import statements by defining your own __import__ function and assigning it to __builtin__.__import__ (make sure to save the previous value, since your override will no doubt want to delegate to it; and you'll need to import __builtin__ to get the builtin-objects module).
For example (Py2.4 specific, since that's what you're asking about), save in aim.py the following:
import __builtin__

realimp = __builtin__.__import__

def my_import(name, globals={}, locals={}, fromlist=[]):
    print 'importing', name, fromlist
    return realimp(name, globals, locals, fromlist)

__builtin__.__import__ = my_import

from os import path
and now:
$ python2.4 aim.py
importing os ('path',)
So this lets you intercept any specific import request you want, and alter the imported module[s] as you wish before you return them -- see the specs here. This is the kind of "hook" you're looking for, right?
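Applied to the question, the hook could redirect just that one import; here is a sketch in the same Py2.4 style, where MockDataLoader is a hypothetical replacement module:

import __builtin__
import MockDataLoader  # hypothetical module defining a fake load_me_data

realimp = __builtin__.__import__

def my_import(name, globals={}, locals={}, fromlist=[]):
    if name == 'LegacyDataLoader':
        # hand back the mock; "from LegacyDataLoader import load_me_data"
        # then fetches the attribute from it
        return MockDataLoader
    return realimp(name, globals, locals, fromlist)

__builtin__.__import__ = my_import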
There are cleaner ways to do this, but I'll assume that you can't modify the file containing from LegacyDataLoader import load_me_data.
The simplest thing to do is probably to create a new directory called testing_shims, and create a LegacyDataLoader.py file in it. In that file, define whatever fake load_me_data you like. When running the unit tests, put testing_shims into your PYTHONPATH environment variable as the first directory. Alternately, you can modify your test runner to insert testing_shims as the first value in sys.path.
This way, your file will be found when importing LegacyDataLoader, and your code will be loaded instead of the real code.
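The shim itself can be tiny; for example (the return value is invented):

# content of testing_shims/LegacyDataLoader.py
def load_me_data():
    # canned data standing in for the database-backed loader
    return {'rows': []}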
The import statement just grabs stuff from sys.modules if a matching name is found there, so the simplest thing is to make sure you insert your own module into sys.modules under the target name before anything else tries to import the real thing.
# in test code
import sys
import MockDataLoader
sys.modules['LegacyDataLoader'] = MockDataLoader
import module_under_test
There are a handful of variations on the theme, but that basic approach should work fine to do what you describe in the question. A slightly simpler approach would be this, using just a mock function to replace the one in question:
# in test code
import module_under_test

def mock_load_me_data():
    # do mock stuff here
    pass

module_under_test.load_me_data = mock_load_me_data
That simply replaces the appropriate name right in the module itself, so when you invoke the code under test, presumably do_something() in your question, it calls your mock routine.
Well, if the import fails by raising an exception, you could put it in a try...except block:
try:
    from LegacyDataLoader import load_me_data
except:  # put the error that occurs here, so as not to mask actual problems
    from MockDataLoader import load_me_data
Is that what you're looking for? If it fails but doesn't raise an exception, you could have it run the unit test with a special command line flag, like --unittest, like this:
import sys

if "--unittest" in sys.argv:
    from MockDataLoader import load_me_data
else:
    from LegacyDataLoader import load_me_data