How to test a Python class which calls a missing module?

I'm trying to test a Python class with pytest.
I have a Python module like so:
import module_A
from module_b import module_c

class ClassToTest():
    def foo(self):
        return module_c.func() + 1

    def bar(self):
        return module_A.func() + 2
So, how do I properly test the bar() function if I don't have any of the modules imported above installed?
I decided to patch module_A.func() with mock, but the object being patched has to be importable first.
I created a fake module_A locally and did this:
import module_A
from mock import patch

@patch('module_A.func', return_value=10)
def test_bar(mock_func):
    instance = ClassToTest()
    assert instance.bar() == 12
This fails with an ImportError because I don't have module_b installed. How do I solve this? Is it better to reorganize the code to make it more testable, or should I use another approach to build a good test?

Something like this may help, but you have to place it before the import of whatever module holds ClassToTest.
import sys

try:
    import module_b
except ImportError:
    from unittest.mock import Mock

    module_b = Mock()
    module_b.module_c = module_b

    def func():
        return 10

    module_b.func = func
    sys.modules["module_b"] = module_b

# import your code under test here
With self.assertEqual(ClassToTest().foo(), 12) I got an AssertionError that 11 != 12; it wasn't happy that the numbers didn't match, so the fake module_b was accepted.
Please note, I was only playing around with faking out a module_b import. The whole test structure probably has issues, as Code-Apprentice pointed out.
Second warning: I would not do this to fake out an installed module_b as if it were not installed. Once that mock is registered in sys.modules, it will need tidying up. A reload ought to do it, but who knows? In that case, the usual mock/patch might yield better results.
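If that tidying-up is a concern, a minimal sketch (my own variation on the above, with hypothetical fakes for both module_A and module_b, and a hypothetical code_under_test module holding ClassToTest) is to register the fakes with mock.patch.dict, which restores sys.modules automatically when the block exits:
import sys
from unittest.mock import Mock, patch

fake_module_a = Mock()
fake_module_a.func = lambda: 10
fake_module_b = Mock()
fake_module_b.module_c = fake_module_b
fake_module_b.func = lambda: 10

# patch.dict saves and restores the affected sys.modules entries, so no manual cleanup is needed
with patch.dict(sys.modules, {"module_A": fake_module_a, "module_b": fake_module_b}):
    import code_under_test  # hypothetical module containing ClassToTest
    assert code_under_test.ClassToTest().foo() == 11
    assert code_under_test.ClassToTest().bar() == 12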

Related

Annotate a function argument as being a specific module

I have a pytest fixture that imports a specific module. This is needed because importing the module is very expensive, so we don't want to do it at import time (i.e. during pytest test collection). This results in code like this:
import pytest

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix):
    assert my_module_fix.my_func() == 5
I am using PyCharm and would like to have type-checking and autocompletion in my tests. To achieve that, I would somehow have to annotate the my_module_fix parameter as having the type of the my_module module.
I have no idea how to achieve that. All I found is that I can annotate my_module_fix as being of type types.ModuleType, but that is not enough: It is not any module, it is always my_module.
If I get your question, you have two (or three) separate goals:
Deferred import of slowmodule
Autocomplete to continue to work as if it was a standard import
(Potentially?) typing (e.g. mypy?) to continue to work
I can think of at least five different approaches, though I'll only briefly mention the last because it's insane.
Import the module inside your tests
This is (by far) the most common and IMHO preferred solution.
e.g. instead of
import slowmodule

def test_foo():
    slowmodule.foo()

def test_bar():
    slowmodule.bar()
you'd write:
def test_foo():
    import slowmodule
    slowmodule.foo()

def test_bar():
    import slowmodule
    slowmodule.bar()
[deferred importing] Here, the module will be imported on-demand/lazily. So if you have pytest set up to fail fast, and another test fails before pytest gets to your (test_foo, test_bar) tests, the module will never be imported and you'll never incur the runtime cost.
Because of Python's module cache, subsequent import statements won't actually re-import the module, just grab a reference to the already-imported module.
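A quick illustration of that caching (a sketch, assuming slowmodule can actually be imported in this environment):
import sys

import slowmodule                           # first import: the module body runs and the result is cached
cached = sys.modules['slowmodule']

import slowmodule                           # later imports don't re-run the module...
assert sys.modules['slowmodule'] is cached  # ...they just hand back the cached module object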
[autocomplete/typing] Of course, autocomplete will continue to work as you expect in this case. This is a perfectly fine import pattern.
While it does require adding potentially many additional import statements (one inside each test function), it's immediately clear what is going on (regardless of whether it's clear why it's going on).
[3.7+] Proxy your module with module __getattr__
If you create a module (e.g. slowmodule_proxy.py) with the contents like:
def __getattr__(name):
    import slowmodule
    return getattr(slowmodule, name)
And in your tests, e.g.
import slowmodule

def test_foo():
    slowmodule.foo()

def test_bar():
    slowmodule.bar()
instead of:
import slowmodule
you write:
import slowmodule_proxy as slowmodule
[deferred import] Thanks to PEP-562, you can "request" any name from slowmodule_proxy and it will fetch and return the corresponding name from slowmodule. Just as above, including the import inside the function will cause slowmodule to be imported only when the function is called and executed instead of on module load. Module caching still applies here of course, so you're only incurring the import penalty once per interpreter session.
[autocomplete] However, while deferred importing will work (and your tests run without issue), this approach (as stated so far) will "break" autocomplete:
Now we're in the realm of PyCharm. Some IDEs will perform "live" analysis of modules and actually load up the module and inspect its members. (PyDev had this option). If PyCharm did this, implementing module.__dir__ (same PEP) or __all__ would allow your proxy module to masquerade as the actual slowmodule and autocomplete would work.† But, PyCharm does not do this.
Nonetheless, you can fool PyCharm into giving you autocomplete suggestions:
if False:
    import slowmodule
else:
    import slowmodule_proxy as slowmodule
The interpreter will only execute the else branch, importing the proxy and naming it slowmodule (so your test code can continue to reference slowmodule unchanged).
But PyCharm will now provide autocompletion for the underlying module.
† While live analysis can be incredibly helpful, it also comes with a (potential) security concern that static syntax analysis doesn't have. And the maturation of type hinting and stub files has made it even less of an issue.
Proxy slowmodule explicitly
If you really hated the dynamic proxy approach (or the fact that you have to fool PyCharm in this way), you could proxy the module explicitly.
(You'd likely only want to consider this if the slowmodule API is stable.)
If slowmodule has methods foo and bar you'd create a proxy module like:
def foo(*args, **kwargs):
    import slowmodule
    return slowmodule.foo(*args, **kwargs)

def bar(*args, **kwargs):
    import slowmodule
    return slowmodule.bar(*args, **kwargs)
(Using args and kwargs to pass arguments through to the underlying callables. And you could add type hinting to these functions to mirror the slowmodule functions.)
And in your test,
import slowmodule_proxy as slowmodule
Same as before. Importing inside the method gives you the deferred importing you want and the module cache takes care of multiple import calls.
And since it's a real module whose contents can be statically analyzed, there's no need to "fool" PyCharm.
So the benefit of this solution is that you don't have a bizarre looking if False in your test imports. This, however, comes at the (substantial) cost of having to maintain a proxy file alongside your module -- which could prove painful in the case that slowmodule's API wasn't stable.
[3.5+] Use importlib's LazyLoader instead of a proxy module
Instead of the proxy module slowmodule_proxy, you could follow a pattern similar to the one shown in the importlib docs
>>> import importlib.util
>>> import sys
>>> def lazy_import(name):
...     spec = importlib.util.find_spec(name)
...     loader = importlib.util.LazyLoader(spec.loader)
...     spec.loader = loader
...     module = importlib.util.module_from_spec(spec)
...     sys.modules[name] = module
...     loader.exec_module(module)
...     return module
...
>>> lazy_typing = lazy_import("typing")
>>> # lazy_typing is a real module object,
>>> # but it is not loaded in memory yet.
You'd still need to fool PyCharm though, so something like:
if False:
    import slowmodule
else:
    slowmodule = lazy_import('slowmodule')
would be necessary.
Outside of the single additional level of indirection on module member access (and the two-minor-version availability difference), however, it's not immediately clear to me what, if anything, is gained from this approach over the previous proxy-module method.
Use importlib's Finder/Loader machinery to hook import (don't do this)
You could create a custom module Finder/Loader that would (only) hook your slowmodule import and instead load, for example, your proxy module.
Then you could just import that "importhook" module before you imported slowmodule in your tests, e.g.
import myimporthooks
import slowmodule

def test_foo():
    ...
(Here, myimporthooks would use importlib's finder and loader machinery to do something similar to the importhook package, but intercept and redirect the import attempt rather than just serving as an import callback.)
But this is crazy. Not only is what you want (seemingly) achievable through (infinitely) more common and supported methods, but it's incredibly fragile, error-prone and, without diving into the internals of PyTest (which may mess with module loaders itself), it's hard to say whether it'd even work.
When Pytest collects files to be tested, modules are only imported once, even if the same import statement appears in multiple files.
To observe when my_module is imported, add a print statement and then use the Pytest -s flag (short for --capture=no), to ensure that all standard output is displayed.
my_module.py
answer: int = 42
print("MODULE IMPORTED: my_module.py")
You could then add your test fixture to a conftest.py file:
conftest.py
import pytest

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module
Then in your test files, my_module may be imported to add type hints:
test_file_01.py
import my_module

def test_something(my_module_fix: my_module):
    assert my_module_fix.answer == 42
test_file_02.py
import my_module

def test_something2(my_module_fix: my_module):
    assert my_module_fix.answer == 42
Then run Pytest to display all standard output and verify that the module is only imported once at runtime.
pytest -s ./
Output from Pytest
platform linux -- Python 3.9.7, pytest-6.2.5
rootdir: /home/your_username/your_repo
collecting ... MODULE IMPORTED: my_module.py <--- Print statement executed once
collected 2 items
test_file_01.py .
test_file_02.py .
This is quick and naive, but could you possibly annotate your tests with the "quote-style" (string) annotation? It is meant for different purposes, but it may suit here: it skips the import at runtime while still helping your editor.
def test_something(my_module_fix: "my_module"):
In a quick test, this seems to accomplish it at least for my setup.
Although it might not be considered a 'best practice', to keep your specific use case simple, you could just lazily import the module directly in your test where you need it.
def test_something():
    import my_module
    assert my_module.my_func() == 5
I believe Pytest will only import the module when the applicable tests run. The import should also be 'cached', so that if multiple tests import the module, it is only actually imported once. This may also solve the autocomplete issues in your editor.
Side-note: Avoid writing code in a specific way to cater for a specific editor. Keep it simple, not everyone who looks at your code will use Pycharm.
would like to have type-checking and autocompletion in my tests
It sounds like you want Something that fits as the type symbol of your test function:
def test_something(my_module_fix: Something):
    assert my_module_fix.my_func() == 5
... and from this, [hopefully] your IDE can make some inferences about my_module_fix. I'm not a PyCharm user, so I can't speak to what it can tell you from type signatures, but I can say this isn't something that's readily available.
For some intuition, in this example -- Something is a ModuleType like 3 is an int. Analogously, accessing a nonexistent attribute of Something is like doing something not allowed with 3. Perhaps, like accessing an attribute of it (3.__name__).
But really, it seems like you're thinking about this from the wrong direction. The question a type signature answers is: what contract must this [these] argument[s] satisfy for this function to use it [them]? Using the example above, a type of 3 is the kind of thing that is too specific to make useful functions:
def add_to(i: 3):
    return i
Perhaps a better name for your type is:
def test_something(my_module_fix: SomethingThatHasMyFuncMethod):
    assert my_module_fix.my_func() == 5
So the type you probably want is something like this:
class SomethingThatHasMyFuncMethod:
    def my_func(self) -> int: ...
You'll need to define it (maybe in a .pyi file). See here for info.
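One way to spell such a structural type, as a sketch (the answer only says to define it, perhaps in a .pyi file; using typing.Protocol here is my assumption):
from typing import Protocol  # Python 3.8+

class SomethingThatHasMyFuncMethod(Protocol):
    """Anything exposing my_func() -> int (a module, an object, a mock) satisfies this."""
    def my_func(self) -> int: ...

def test_something(my_module_fix: SomethingThatHasMyFuncMethod) -> None:
    assert my_module_fix.my_func() == 5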
Finally, here's some unsolicited advice regarding:
importing the module is very expensive, so we don't want to do it on import-time
You should probably employ some method of making this module do its thing lazily. Some of the utils in the Django framework could serve as a reference point. There's also the descriptor protocol, which is a bit harder to grok but may suit your needs.
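As a minimal sketch of the descriptor idea (my own illustration; "heavy_dependency", MyService and the attribute names are hypothetical), a descriptor can defer the expensive import until the attribute is first read:
import importlib

class LazyModule:
    """Descriptor that imports a module the first time the attribute is accessed."""
    def __init__(self, module_name):
        self.module_name = module_name
        self._module = None

    def __get__(self, obj, objtype=None):
        if self._module is None:
            # Pay the import cost once, only when the attribute is actually used
            self._module = importlib.import_module(self.module_name)
        return self._module

class MyService:
    # Hypothetical expensive dependency; nothing is imported until first use
    heavy = LazyModule("heavy_dependency")

    def compute(self):
        return self.heavy.do_work()  # first access here triggers the import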
You want to add type hinting for the arguments of test_something, in particular to my_module_fix. This question is hard because we can't import my_module at the top.
Well, what is the type of my_module? I'm going to assume that if you do import my_module and then type(my_module) you will get <class 'module'>.
If you're okay with not telling users exactly which module it has to be, then you could try this.
from types import ModuleType

import pytest

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix: ModuleType):
    assert my_module_fix.my_func() == 5
The downside is that a user might infer that any old module would be suitable for my_module_fix.
You want it to be more specific? There's a cost to that, but we can get around some of it (it's hard to speak on performance in VS Code vs PyCharm vs others).
In that case, let's use TYPE_CHECKING. This was added to the typing module back in Python 3.5.3.
from __future__ import annotations  # postpone annotation evaluation so the bare name below works at runtime

from types import ModuleType
from typing import TYPE_CHECKING

import pytest

if TYPE_CHECKING:
    import my_module

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix: my_module):
    assert my_module_fix.my_func() == 5
You won't pay these costs during normal run time. You will pay the cost of importing my_module when your type checker is doing what it does.
If you're on an earlier version of Python, you might need to do this instead.
from types import ModuleType
from typing import TYPE_CHECKING

import pytest

if TYPE_CHECKING:
    import my_module

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix: "my_module"):
    assert my_module_fix.my_func() == 5
The type checker in VS Code is smart enough to know what's going on here. I can't speak for other type checkers.

How does the 'from' style import treat the imported module's dependencies?

I'm trying to figure out how the Python import system aliases the dependencies of a module imported using the 'from' keyword.
In particular, my use-case is writing unittest.mock.patch statements for mocking attributes of an object that is imported using "from", and that object uses something imported from another module. However, more broadly I just want to understand how the import system works.
My question may make more sense with an example:
Assume we have some class A in some module a, that uses a class from module b
a.py:
from b import B

class A:
    def __init__(self):
        B.use_some_static_method()
        # OR
        self.b_instance = B()
Then, for example, in some test code we want to mock B when testing our A object. How would I write a patch statement?
test_a.py:
from a import A

def test_a(mocker):
    # Mock with pytest-mock's mocker
    mock_b = mocker.patch('<some path>')
    a = A()
    a.do_something_with_B()
What path would I put in place of <some path>? I realize that I could simply use import a instead and then mock a.B, but I'm more interested in understanding how the import system works and why it works that way.
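For reference, a minimal sketch of the "patch it where it is used" idea the question already hints at (assuming the a.py above and its static-method variant of __init__):
from a import A

def test_a(mocker):
    # Because a.py did "from b import B", the name to patch lives in module a, not in module b
    mock_b = mocker.patch('a.B')
    A()  # A.__init__ now talks to the mock instead of the real B
    mock_b.use_some_static_method.assert_called_once()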

Python Mock Patch patches a used module for submodule under test and every other imported submodule in the submodule under test

Sorry for the rather confusing title, but I find it really hard to come up with a crisp headline.
My Problem:
I have a submodule under test, let's call it backend, and I use random.choice in backend. I patch it in my test and that works; it is patched. backend imports pymongo to do some database stuff, and pymongo uses random.choice as well, for selecting the right server in core.py.
Somehow my patch of backend.random.choice is also used for backend.pymongo.random.choice. I am not sure why. Is that the correct behavior? And if it is, what is the way to get around it? Preferably without changing any code in backend and pymongo, but only in my tests.
Other investigation:
I set up a little test construct to see if that is something related to my project or a general thing.
I created some modules:
bar/bar.py
import random

def do_some_other_something():
    return random.choice([10])
foo/foo.py
import random
from bar.bar import do_some_other_something

def do_something():
    return random.choice([1])

def do_something_else():
    return do_some_other_something()
And a test case:
from foo.foo import do_something, do_something_else
from unittest import mock

assert do_something() == 1        # check
assert do_something_else() == 10  # check

with mock.patch("foo.foo.random.choice") as mock_random_choice:
    assert do_something() != 1        # check (mocked as expected)
    assert do_something_else() == 10  # AssertionError! also mocked
So I am really confused about this. I explicitly mocked foo.foo's random.choice, not bar.bar's. So why is that happening? Is it intended? Is there a way to fix it?
Thank you!
There's only one random module in the program. When foo.foo and bar.bar both import random, they share the random module. Patching the choice function on the random module modifies the one random module used by every part of the program, including both foo.foo and bar.bar.
Instead of patching foo.foo.random.choice, patch foo.foo.random. That replaces foo.foo's reference to the random module, but not bar.bar's reference. bar.bar will continue to access the real random module, but foo.foo will see a mock random.
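A minimal sketch of that suggestion, using the foo/bar modules from the question above:
from unittest import mock

from foo.foo import do_something, do_something_else

# Replace foo.foo's reference to the random module, not the shared random.choice function
with mock.patch("foo.foo.random") as mock_random:
    mock_random.choice.return_value = 99
    assert do_something() == 99        # foo.foo now sees the mock random
    assert do_something_else() == 10   # bar.bar still uses the real random module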

How to mock an import

Module A includes import B at its top. However under test conditions I'd like to mock B in A (mock A.B) and completely refrain from importing B.
In fact, B isn't installed in the test environment on purpose.
A is the unit under test. I have to import A with all its functionality. B is the module I need to mock. But how can I mock B within A and stop A from importing the real B, if the first thing A does is import B?
(The reason B isn't installed is that I use pypy for quick testing and unfortunately B isn't compatible with pypy yet.)
How could this be done?
You can assign to sys.modules['B'] before importing A to get what you want:
test.py:
import sys
sys.modules['B'] = __import__('mock_B')
import A
print(A.B.__name__)
A.py:
import B
Note B.py does not exist, but when running test.py no error is returned and print(A.B.__name__) prints mock_B. You still have to create a mock_B.py where you mock B's actual functions/variables/etc. Or you can just assign a Mock() directly:
test.py:
import sys
from unittest.mock import Mock  # (or: from mock import Mock)

sys.modules['B'] = Mock()
import A
The builtin __import__ can be mocked with the 'mock' library for more control:
import mock

# Store original __import__
orig_import = __import__
# This will be the B module
b_mock = mock.Mock()

def import_mock(name, *args):
    if name == 'B':
        return b_mock
    return orig_import(name, *args)

with mock.patch('__builtin__.__import__', side_effect=import_mock):
    import A
import A
Say A looks like:
import B

def a():
    return B.func()
A.a() returns b_mock.func() which can be mocked also.
b_mock.func.return_value = 'spam'
A.a() # returns 'spam'
Note for Python 3:
As stated in the changelog for 3.0, __builtin__ is now named builtins:
Renamed module __builtin__ to builtins (removing the underscores, adding an ‘s’).
The code in this answer works fine if you replace __builtin__ by builtins for Python 3.
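Put together, a minimal Python 3 sketch of the same idea (my adaptation, using unittest.mock and the builtins module):
import builtins
from unittest import mock

orig_import = builtins.__import__
b_mock = mock.Mock()

def import_mock(name, *args, **kwargs):
    if name == 'B':
        return b_mock
    return orig_import(name, *args, **kwargs)

with mock.patch('builtins.__import__', side_effect=import_mock):
    import A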
How to mock an import, (mock A.B)?
Module A includes import B at its top.
Easy, just mock the library in sys.modules before it gets imported:
if wrong_platform():
    sys.modules['B'] = mock.MagicMock()
and then, so long as A doesn't rely on specific types of data being returned from B's objects:
import A
should just work.
You can also mock import A.B:
This works even if you have submodules, but you'll want to mock each module. Say you have this:
from foo import This, That, andTheOtherThing
from foo.bar import Yada, YadaYada
from foo.baz import Blah, getBlah, boink
To mock, simply do the below before the module that contains the above is imported:
sys.modules['foo'] = MagicMock()
sys.modules['foo.bar'] = MagicMock()
sys.modules['foo.baz'] = MagicMock()
(My experience: I had a dependency that works on one platform, Windows, but didn't work on Linux, where we run our daily tests.
So I needed to mock the dependency for our tests. Luckily it was a black box, so I didn't need to set up a lot of interaction.)
Mocking Side Effects
Addendum: Actually, I needed to simulate a side-effect that took some time. So I needed an object's method to sleep for a second. That would work like this:
sys.modules['foo'] = MagicMock()
sys.modules['foo.bar'] = MagicMock()
sys.modules['foo.baz'] = MagicMock()
# setup the side-effect:
from time import sleep
def sleep_one(*args):
    sleep(1)
# this gives us the mock objects that will be used
from foo.bar import MyObject
my_instance = MyObject()
# mock the method!
my_instance.method_that_takes_time = mock.MagicMock(side_effect=sleep_one)
And then the code takes some time to run, just like the real method.
Aaron Hall's answer works for me.
Just want to mention one important thing:
if in A.py you do
from B.C.D import E
then in test.py you must mock every module along the path, otherwise you get an ImportError:
sys.modules['B'] = mock.MagicMock()
sys.modules['B.C'] = mock.MagicMock()
sys.modules['B.C.D'] = mock.MagicMock()
I realize I'm a bit late to the party here, but here's a somewhat insane way to automate this with the mock library:
(here's an example usage)
import contextlib
import collections
import mock
import sys

def fake_module(**args):
    return (collections.namedtuple('module', args.keys())(**args))

def get_patch_dict(dotted_module_path, module):
    patch_dict = {}
    module_splits = dotted_module_path.split('.')

    # Add our module to the patch dict
    patch_dict[dotted_module_path] = module

    # We add the rest of the fake modules in backwards
    while module_splits:
        # This adds the next level up into the patch dict, which is a fake
        # module that points at the next level down
        patch_dict['.'.join(module_splits[:-1])] = fake_module(
            **{module_splits[-1]: patch_dict['.'.join(module_splits)]}
        )
        module_splits = module_splits[:-1]
    return patch_dict

with mock.patch.dict(
    sys.modules,
    get_patch_dict('herp.derp', fake_module(foo='bar'))
):
    import herp.derp
    # prints bar
    print(herp.derp.foo)
The reason this is so ridiculously complicated is that when an import occurs, Python basically does this (take for example from herp.derp import foo):
1. Does sys.modules['herp'] exist? Else import it. If it still doesn't exist, ImportError.
2. Does sys.modules['herp.derp'] exist? Else import it. If it still doesn't exist, ImportError.
3. Get the attribute foo of sys.modules['herp.derp']. Else ImportError.
4. foo = sys.modules['herp.derp'].foo
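As a rough, simplified sketch of those steps (my own illustration; the real import machinery does considerably more):
import importlib
import sys

# Roughly what "from herp.derp import foo" boils down to:
if 'herp' not in sys.modules:
    importlib.import_module('herp')        # step 1
if 'herp.derp' not in sys.modules:
    importlib.import_module('herp.derp')   # step 2
try:
    foo = sys.modules['herp.derp'].foo     # steps 3 and 4
except AttributeError as exc:
    raise ImportError("cannot import name 'foo'") from exc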
There are some downsides to this hacked-together patch_dict solution: if something else relies on other stuff in the module path, this kind of screws it over. Also, this only works for stuff that is being imported inline, such as
def foo():
    import herp.derp
or
def foo():
    __import__('herp.derp')
I found a fine way to mock imports in Python. It's Eric Zaadi's solution, found here, which I just use inside my Django application.
I've got a SeatInterface class which is an interface to the Seat model class.
So inside my seat_interface module I have such an import:
from ..models import Seat
class SeatInterface(object):
    (...)
I wanted to create isolated tests for the SeatInterface class, with the Seat class mocked as FakeSeat. The problem was: how to run the tests offline, where the Django application is down. I had the error below:
ImproperlyConfigured: Requested setting BASE_DIR, but settings are not
configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing
settings.
Ran 1 test in 0.078s
FAILED (errors=1)
The solution was:
import unittest
from mock import MagicMock, patch

class FakeSeat(object):
    pass

class TestSeatInterface(unittest.TestCase):

    def setUp(self):
        models_mock = MagicMock()
        models_mock.Seat.return_value = FakeSeat
        modules = {'app.app.models': models_mock}
        patch.dict('sys.modules', modules).start()

    def test1(self):
        from app.app.models_interface.seat_interface import SeatInterface
And then the test magically runs OK :)
.
Ran 1 test in 0.002s
OK
If you do import ModuleB you are really calling the builtin function __import__ as:
ModuleB = __import__('ModuleB', globals(), locals(), [], -1)
You could overwrite this by importing the __builtin__ module and making a wrapper around the __builtin__.__import__ method. Or you could play with the NullImporter hook from the imp module: catch the exception and mock your module/class in the except block.
Pointer to the relevant docs:
docs.python.org: __import__
Accessing Import internals with the imp Module
I hope this helps. Be HIGHLY advised that you are stepping into the more arcane areas of Python programming, and that a) a solid understanding of what you really want to achieve and b) a thorough understanding of the implications are important.
I know this is a fairly old question, but I have found myself returning to it a few times recently, and wanted to share a concise solution to this.
import sys
from unittest import mock

def mock_module_import(module):
    """Source: https://stackoverflow.com/a/63584866/3972558"""
    def _outer_wrapper(func):
        def _inner_wrapper(*args, **kwargs):
            orig = sys.modules.get(module)  # get the original module, if present
            sys.modules[module] = mock.MagicMock()  # patch it
            try:
                return func(*args, **kwargs)
            finally:
                if orig is not None:  # if the module was installed, restore the patch
                    sys.modules[module] = orig
                else:  # if the module never existed, remove the key
                    del sys.modules[module]
        return _inner_wrapper
    return _outer_wrapper
It works by temporarily patching the key for the module in sys.modules, and then restoring the original module after the decorated function is called. This can be used in scenarios where a package may not be installed in the testing environment, or a more complex scenario where the patched module might actually perform some of its own internal monkey-patching (which was the case I was facing).
Here's an example of use:
@mock_module_import("some_module")
def test_foo():
    # use something that relies upon "some_module" here
    assert True
I found myself facing a similar problem today, and I've decided to solve it a bit differently. Rather than hacking on top of Python's import machinery, you can simply add the mocked module into sys.path, and have Python prefer it over the original module.
Create the replacement module in a subdirectory, e.g.:
mkdir -p test/mocked-lib
${EDITOR} test/mocked-lib/B.py
Before A is imported, insert this directory into sys.path. I'm using pytest, so in my test/conftest.py, I've simply done:
import os.path
import sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "mocked-lib"))
Now, when the test suite is run, the mocked-lib subdirectory is prepended into sys.path and import A uses B from mocked-lib.

Mock Y of (from X import Y) in doctest (python)

I'm trying to create a doctest with a mock of a function that resides in a separate module and that is imported as below:
from foomodule import foo

def bar():
    """
    >>> from minimock import mock
    >>> mock('foo', nsdicts=(bar.func_globals,), returns=5)
    >>> bar()
    Called foo()
    10
    """
    return foo() * 2

import doctest
doctest.testmod()
foomodule.py:
def foo():
    raise ValueError("Don't call me during testing!")
This fails.
If I change the import to import foomodule and use foomodule.foo everywhere, then it works.
But is there any solution for mocking a function imported the way above?
You've just met one of the many reasons that make it best to never import objects from "within" modules -- only modules themselves (possibly from within packages). We've made this rule part of our style guidelines at Google (published here) and I heartily recommend it to every Python programmer.
That being said, what you need to do is take the foomodule.foo that you've just replaced with a mock and stick it in the current module. I don't recall enough of doctest's internals to confirm whether
>>> import foomodule
>>> foo = foomodule.foo
will suffice for that -- give it a try, and if it doesn't work, do instead
>>> import foomodule
>>> import sys
>>> sys.modules[__name__].foo = foomodule.foo
Yeah, it's a mess, but the cause of that mess is that innocent-looking from foomodule import foo -- eschew that, and your life will be simpler and more productive ;-).
Finally, I found out that this was actually an issue with the trunk version of MiniMock. The old stable version performs as expected.
