Testing constants declarations using pytest

We have a Python 3.7 application with a constants.py file that declares its constants like this:
import os

APP_CONSTANT_1 = os.environ.get('app-constant-1-value')
In a test.py we were hoping to test the setting of these constants using something like this (this is highly simplified but represents the core issue):
import os

import pytest

class TestConfig:
    """General config tests"""

    @pytest.fixture
    def mock_os_environ(self, monkeypatch):
        def mock_get(*args, **kwargs):
            return 'test_config_value'
        monkeypatch.setattr(os.environ, "get", mock_get)

    def test_mock_env_vars(self, mock_os_environ):
        import constants
        assert os.environ.get('app-constant-1-value') == 'test_config_value'  # passes
        assert constants.APP_CONSTANT_1 == 'test_config_value'  # fails
The second assertion fails: constants.APP_CONSTANT_1 is None. It turns out that constants.py is loaded during pytest's collection phase, so APP_CONSTANT_1 is already set by the time the test runs.
What are we missing here? I feel like there is a simple way to resolve this in pytest but haven't yet discovered the secret. Is there some way to avoid loading the constants file before the tests run? Any ideas are appreciated.

The problem is most likely that constants has already been imported before the test runs. To make sure it picks up the patched value, you have to reload it:
import os
from importlib import reload

import pytest

import constants

class TestConfig:
    """General config tests"""

    @pytest.fixture
    def mock_os_environ(self, monkeypatch):
        monkeypatch.setenv('app-constant-1-value', 'test_config_value')
        reload(constants)

    def test_mock_env_vars(self, mock_os_environ):
        assert os.environ.get('app-constant-1-value') == 'test_config_value'
        assert constants.APP_CONSTANT_1 == 'test_config_value'
Note that I used monkeypatch.setenv to set just the variable you need. If you don't need to replace all environment variables, this is easier to use.
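One caveat worth knowing: because the fixture reloads constants while the environment is patched, the module keeps the test value until something reloads it again. A sketch of a variant that restores the module afterwards (same imports as above; monkeypatch.undo() reverts the patched environment early so the second reload sees the real values):
@pytest.fixture
def mock_os_environ(self, monkeypatch):
    monkeypatch.setenv('app-constant-1-value', 'test_config_value')
    reload(constants)
    yield
    monkeypatch.undo()   # drop the patched variable first...
    reload(constants)    # ...then reload so later tests see the real environment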

Erm, I would avoid using module-level constants like this. You can put a facade in front of os.environ for a start, and then use a mocked subclass for your unit tests, so you can have my_env.unique_env as a member variable. You can then, for example, use the json module to read a JSON configuration file without hard-coding values in Python.
The subclass can then hold the relevant variables (or methods, if you prefer).
Adding a facade over os.environ provides you with the abstraction you are looking for, without any of the problems.
Even if one is working in a legacy/larger project, the advantage of using an adapter for access to the environment should be apparent.
Since you are writing unit tests, there is an opportunity to use the adapter class in both the tests and the functions being tested.
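As a rough illustration of that idea (Environment and FakeEnvironment are invented names for this sketch, not part of the question):
import os

class Environment:
    """Facade over os.environ; production code talks to this, never to os.environ directly."""
    def get(self, name, default=None):
        return os.environ.get(name, default)

class FakeEnvironment(Environment):
    """Test double backed by a plain dict instead of the real environment."""
    def __init__(self, values):
        self._values = values
    def get(self, name, default=None):
        return self._values.get(name, default)
A test can then pass FakeEnvironment({'app-constant-1-value': 'test_config_value'}) to the code under test, with no patching or reloading required.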


Annotate a function argument as being a specific module

I have a pytest fixture that imports a specific module. This is needed as importing the module is very expensive, so we don't want to do it on import-time (i.e. during pytest test collection). This results in code like this:
@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix):
    assert my_module_fix.my_func() == 5
I am using PyCharm and would like to have type-checking and autocompletion in my tests. To achieve that, I would somehow have to annotate the my_module_fix parameter as having the type of the my_module module.
I have no idea how to achieve that. All I found is that I can annotate my_module_fix as being of type types.ModuleType, but that is not enough: it is not just any module, it is always my_module.
If I get your question, you have two (or three) separate goals:
Deferred import of slowmodule
Autocomplete to continue to work as if it was a standard import
(Potentially?) typing (e.g. mypy?) to continue to work
I can think of at least five different approaches, though I'll only briefly mention the last because it's insane.
Import the module inside your tests
This is (by far) the most common and IMHO preferred solution.
e.g. instead of
import slowmodule

def test_foo():
    slowmodule.foo()

def test_bar():
    slowmodule.bar()
you'd write:
def test_foo():
    import slowmodule
    slowmodule.foo()

def test_bar():
    import slowmodule
    slowmodule.bar()
[deferred importing] Here, the module will be imported on-demand/lazily. So if you have pytest set up to fail fast, and another test fails before pytest gets to your (test_foo, test_bar) tests, the module will never be imported and you'll never incur the runtime cost.
Because of Python's module cache, subsequent import statements won't actually re-import the module, just grab a reference to the already-imported module.
[autocomplete/typing] Of course, autocomplete will continue to work as you expect in this case. This is a perfectly fine import pattern.
While it does require adding potentially many additional import statements (one inside each test function), it's immediately clear what is going on (regardless of whether it's clear why it's going on).
[3.7+] Proxy your module with module __getattr__
If you create a module (e.g. slowmodule_proxy.py) with contents like:
def __getattr__(name):
    import slowmodule
    return getattr(slowmodule, name)
And in your tests, e.g.
import slowmodule

def test_foo():
    slowmodule.foo()

def test_bar():
    slowmodule.bar()
instead of:
import slowmodule
you write:
import slowmodule_proxy as slowmodule
[deferred import] Thanks to PEP-562, you can "request" any name from slowmodule_proxy and it will fetch and return the corresponding name from slowmodule. Just as above, including the import inside the function will cause slowmodule to be imported only when the function is called and executed instead of on module load. Module caching still applies here of course, so you're only incurring the import penalty once per interpreter session.
[autocomplete] However, while deferred importing will work (and your tests run without issue), this approach (as stated so far) will "break" autocomplete:
Now we're in the realm of PyCharm. Some IDEs will perform "live" analysis of modules and actually load up the module and inspect its members. (PyDev had this option). If PyCharm did this, implementing module.__dir__ (same PEP) or __all__ would allow your proxy module to masquerade as the actual slowmodule and autocomplete would work.† But, PyCharm does not do this.
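For illustration, if an IDE did perform that kind of live analysis, the proxy could advertise the real module's names via a module-level __dir__ (also PEP 562); a sketch:
# slowmodule_proxy.py
def __getattr__(name):
    import slowmodule
    return getattr(slowmodule, name)

def __dir__():
    # expose the real module's names to dir() and live introspection
    import slowmodule
    return dir(slowmodule)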
Nonetheless, you can fool PyCharm into giving you autocomplete suggestions:
if False:
    import slowmodule
else:
    import slowmodule_proxy as slowmodule
The interpreter will only execute the else branch, importing the proxy and naming it slowmodule (so your test code can continue to reference slowmodule unchanged).
But PyCharm will now provide autocompletion for the underlying module.
† While live analysis can be incredibly helpful, there's also a (potential) security concern that comes with it that static syntax analysis doesn't have. And the maturation of type hinting and stub files has made it less of an issue still.
Proxy slowmodule explicitly
If you really hated the dynamic proxy approach (or the fact that you have to fool PyCharm in this way), you could proxy the module explicitly.
(You'd likely only want to consider this if the slowmodule API is stable.)
If slowmodule has methods foo and bar you'd create a proxy module like:
def foo(*args, **kwargs):
    import slowmodule
    return slowmodule.foo(*args, **kwargs)

def bar(*args, **kwargs):
    import slowmodule
    return slowmodule.bar(*args, **kwargs)
(Using args and kwargs to pass arguments through to the underlying callables. And you could add type hinting to these functions to mirror the slowmodule functions.)
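For example, if slowmodule.foo happened to take an int and return a str (a made-up signature, purely for this sketch), the proxy entry could mirror it so static analysis sees the real types:
def foo(x: int) -> str:
    import slowmodule
    return slowmodule.foo(x)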
And in your test,
import slowmodule_proxy as slowmodule
Same as before. Importing inside the method gives you the deferred importing you want and the module cache takes care of multiple import calls.
And since it's a real module whose contents can be statically analyzed, there's no need to "fool" PyCharm.
So the benefit of this solution is that you don't have a bizarre looking if False in your test imports. This, however, comes at the (substantial) cost of having to maintain a proxy file alongside your module -- which could prove painful in the case that slowmodule's API wasn't stable.
[3.5+] Use importlib's LazyLoader instead of a proxy module
Instead of the proxy module slowmodule_proxy, you could follow a pattern similar to the one shown in the importlib docs
>>> import importlib.util
>>> import sys
>>> def lazy_import(name):
...     spec = importlib.util.find_spec(name)
...     loader = importlib.util.LazyLoader(spec.loader)
...     spec.loader = loader
...     module = importlib.util.module_from_spec(spec)
...     sys.modules[name] = module
...     loader.exec_module(module)
...     return module
...
>>> lazy_typing = lazy_import("typing")
>>> # lazy_typing is a real module object,
>>> # but it is not loaded in memory yet.
You'd still need to fool PyCharm though, so something like:
if False:
    import slowmodule
else:
    slowmodule = lazy_import('slowmodule')
would be necessary.
Outside of the single additional level of indirection on module member access (and the two minor version availability difference), it's not immediately clear to me what, if anything, there is to be gained from this approach over the previous proxy module method, however.
Use importlib's Finder/Loader machinery to hook import (don't do this)
You could create a custom module Finder/Loader that would (only) hook your slowmodule import and instead load, for example, your proxy module.
Then you could just import that "importhook" module before you import slowmodule in your tests, e.g.
import myimporthooks
import slowmodule

def test_foo():
    ...
(Here, myimporthooks would use importlib's finder and loader machinery to do something similar to the importhook package but intercept and redirect the import attempt rather than just serving as an import callback.)
But this is crazy. Not only is what you want (seemingly) achievable through (infinitely) more common and supported methods, but it's incredibly fragile, error-prone and, without diving into the internals of PyTest (which may mess with module loaders itself), it's hard to say whether it'd even work.
When Pytest collects files to be tested, modules are only imported once, even if the same import statement appears in multiple files.
To observe when my_module is imported, add a print statement and then use the Pytest -s flag (short for --capture=no) to ensure that all standard output is displayed.
my_module.py
answer: int = 42
print("MODULE IMPORTED: my_module.py")
You could then add your test fixture to a conftest.py file:
conftest.py
import pytest

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module
Then in your test files, my_module.py may be imported to add type hints:
test_file_01.py
import my_module

def test_something(my_module_fix: my_module):
    assert my_module_fix.answer == 42
test_file_02.py
import my_module

def test_something2(my_module_fix: my_module):
    assert my_module_fix.answer == 42
Then run Pytest to display all standard output and verify that the module is only imported once at runtime.
pytest -s ./
Output from Pytest
platform linux -- Python 3.9.7, pytest-6.2.5
rootdir: /home/your_username/your_repo
collecting ... MODULE IMPORTED: my_module.py <--- Print statement executed once
collected 2 items
test_file_01.py .
test_file_02.py .
This is quick and naive, but could you annotate your tests with the "quote-style" annotation? It is meant for different purposes, but it may suit here by skipping the import at runtime while still helping your editor.
def test_something(my_module_fix: "my_module"):
In a quick test, this seems to accomplish it at least for my setup.
Although it might not be considered a 'best practice', to keep your specific use case simple, you could just lazily import the module directly in your test where you need it.
def test_something():
    import my_module
    assert my_module.my_func() == 5
I believe Pytest will only import the module when the applicable tests run. Python's module cache also means that if multiple tests import the module, it is only actually imported once. This may also solve the autocomplete issues in your editor.
Side-note: Avoid writing code in a specific way to cater for a specific editor. Keep it simple, not everyone who looks at your code will use Pycharm.
would like to have type-checking and autocompletion in my tests
It sounds like you want Something that fits as the type symbol of your test function:
def test_something(my_module_fix: Something):
    assert my_module_fix.my_func() == 5
... and from this, [hopefully] your IDE can make some inferences about my_module_fix. I'm not a PyCharm user, so I can't speak to what it can tell you from type signatures, but I can say this isn't something that's readily available.
For some intuition: in this example, Something is a ModuleType like 3 is an int. Analogously, accessing a nonexistent attribute of Something is like doing something not allowed with 3, say, accessing an attribute of it (3.__name__).
But really, it seems like you're thinking about this from the wrong direction. The question a type signature answers is: what contract must this [these] argument[s] satisfy for this function to use it [them]? Using the example above, a type of 3 is the kind of thing that is too specific to make useful functions:
def add_to(i: 3):
    return i
Perhaps a better name for your type is:
def test_something(my_module_fix: SomethingThatHasMyFuncMethod):
    assert my_module_fix.my_func() == 5
So the type you probably want is something like this:
class SomethingThatHasMyFuncMethod:
    def my_func(self) -> int: ...
You'll need to define it (maybe in a .pyi file). See here for info.
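If you'd rather avoid a separate stub file, typing.Protocol (Python 3.8+) can express the same structural contract inline; a minimal sketch:
from typing import Protocol

class SomethingThatHasMyFuncMethod(Protocol):
    def my_func(self) -> int: ...
Any object with a matching my_func, including the module yielded by the fixture, then satisfies the annotation.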
Finally, here's some unsolicited advice regarding:
importing the module is very expensive, so we don't want to do it on import-time
You should probably employ some method of making this module do its thing lazily. Some of the utils in the django framework could serve as a reference point. There's also the descriptor protocol, which is a bit harder to grok but may suit your needs.
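As a rough illustration of the descriptor idea (lazy_attribute, Service, and engine are invented names for this sketch): a non-data descriptor can defer the expensive work until first access and then cache the result on the instance:
class lazy_attribute:
    """Compute a value on first access, then cache it on the instance."""
    def __init__(self, factory):
        self.factory = factory
    def __set_name__(self, owner, name):
        self.name = name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.factory(obj)
        obj.__dict__[self.name] = value  # instance attribute now shadows the descriptor
        return value

class Service:
    @lazy_attribute
    def engine(self):
        import my_module  # the expensive import is paid only on first access
        return my_module.my_func()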
You want to add type hinting for the arguments of test_something, in particular to my_module_fix. This question is hard because we can't import my_module at the top.
Well, what is the type of my_module? I'm going to assume that if you do import my_module and then type(my_module) you will get <class 'module'>.
If you're okay with not telling users exactly which module it has to be, then you could try this.
from types import ModuleType

import pytest

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix: ModuleType):
    assert my_module_fix.my_func() == 5
The downside is that a user might infer that any old module would be suitable for my_module_fix.
You want it to be more specific? There's a cost to that, but we can get around some of it (it's hard to speak on performance in VS Code vs PyCharm vs others).
In that case, let's use TYPE_CHECKING (available since Python 3.5.3).
from __future__ import annotations  # without this, the my_module annotation below
                                    # would raise NameError at runtime

from typing import TYPE_CHECKING

import pytest

if TYPE_CHECKING:
    import my_module

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix: my_module):
    assert my_module_fix.my_func() == 5
You won't pay these costs during normal run time. You will pay the cost of importing my_module when your type checker is doing what it does.
If you're on an earlier version of Python, you might need to do this instead.
from typing import TYPE_CHECKING

import pytest

if TYPE_CHECKING:
    import my_module

@pytest.fixture
def my_module_fix():
    import my_module
    yield my_module

def test_something(my_module_fix: "my_module"):
    assert my_module_fix.my_func() == 5
The type checker in VS Code is smart enough to know what's going on here. I can't speak for other type checkers.

"Mocking where it's defined" in python mock?

I want to test an evolving SQLite database application, which is in parallel used "productively". In fact I am investigating a pile of large text files by importing them to the database and fiddling around with it. I am used to develop test-driven, and I do not want to drop that for this investigation. But running tests against the "production" database feels somewhat strange. So my objective is to run the tests against a test database (a real SQLite database, not a mock) containing a controlled, but considerable amount of real data showing all kinds of variability I have met during the investigation.
To support this approach, I have a central module myconst.py containing a function returning the name of the database that is used like so:
import sqlite3

import myconst

conn = sqlite3.connect(myconst.get_db_name())
Now in the unittest TestCases, I thought about mocking like so:
#patch("myconst.get_db_name", return_value="../test.sqlite")
def test_this_and_that(self, mo):
...
where the test calls functions that will, in nested functions, access the database using myconst.get_db_name().
I have tried to do a little mocking for myself first, but it tends to be clumsy and error prone so I decided to dive into the python mock module as shown before.
Unfortunately, I found warnings all over, that I am supposed to "mock where it's used and not where it's defined" like so:
#patch("mymodule.myconst.get_db_name", return_value="../test.sqlite")
def test_this_and_that(self, mo):
self.assertEqual(mymodule.func_to_be_tested(), 1)
But mymodule will likely not call database functions itself but delegate that to another module. This in turn would imply that my unit tests have to know the call tree down to where the database is actually accessed – something I really want to avoid because it would lead to unnecessary test refactoring when the code is refactored.
So I tried to create a minimal example to understand the behavior of mock and where it fails to allow me to mock "at the source". Because a multi module setup is clumsy here, I have provided the original code also on github for everybody's convenience. See this:
myconst.py
----------
# global definition of the database name
def get_db_name():
    return "../production.sqlite"

# this will replace get_db_name()
TEST_VALUE = "../test.sqlite"

def fun():
    return TEST_VALUE
inner.py
--------
import myconst

def functio():
    return myconst.get_db_name()

print("inner:", functio())
test_inner.py
-------------
from mock import patch
import unittest

import myconst, inner

class Tests(unittest.TestCase):
    @patch("inner.myconst.get_db_name", side_effect=myconst.fun)
    def test_inner(self, mo):
        """mocking where used"""
        self.assertEqual(inner.functio(), myconst.TEST_VALUE)
        self.assertTrue(mo.called)
outer.py
--------
import inner

def functio():
    return inner.functio()

print("outer:", functio())
test_outer.py
-------------
from mock import patch
import unittest

import myconst, outer

class Tests(unittest.TestCase):
    @patch("myconst.get_db_name", side_effect=myconst.fun)
    def test_outer(self, mo):
        """mocking where it comes from"""
        self.assertEqual(outer.functio(), myconst.TEST_VALUE)
        self.assertTrue(mo.called)
unittests.py
------------
"""Deeply mocking a database name..."""
import unittest
print(__doc__)
suite = unittest.TestLoader().discover('.', pattern='test_*.py')
unittest.TextTestRunner(verbosity=2).run(suite)
test_inner.py works like the sources linked above say, and so I was expecting it to pass. test_outer.py should fail if I understand the caveats right. But all the tests pass without complaint! So my mock is picked up every time, even when the mocked function is called from down the call stack as in test_outer.py. From that example I would conclude that my approach is safe, but on the other hand the warnings are consistent across quite a few sources, and I do not want to recklessly risk my "production" database by using concepts that I do not grok.
So my question is: Do I misunderstand the warnings or are these warnings just over-cautious?
Finally I sorted it out. Maybe this will help future visitors, so I will share my findings:
When changing the code like so:
inner.py
--------
from myconst import get_db_name

def functio():
    return get_db_name()
test_inner.py
-------------
#patch("inner.get_db_name", side_effect=myconst.fun)
def test_inner(self, mo):
self.assertEqual(inner.functio(), myconst.TEST_VALUE)
test_inner will succeed, but test_outer will break with
AssertionError: '../production.sqlite' != '../test.sqlite'
This is because mock.patch does not replace the referenced object, which is the function get_db_name in module myconst in both cases. mock will instead rebind the given name, "myconst.get_db_name", to the Mock object passed as the second parameter to the test.
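In other words, the decorator does roughly this for the duration of the test (a sketch of the mechanics, not the actual implementation):
import myconst
from mock import Mock

original = myconst.get_db_name
myconst.get_db_name = Mock(side_effect=myconst.fun)  # rebind the name on the module
try:
    pass  # the decorated test body runs here
finally:
    myconst.get_db_name = original  # patch restores the original binding
A module that did from myconst import get_db_name holds its own reference to original, which the rebinding never touches, and that is why the test breaks.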
test_outer.py
-------------
#patch("myconst.get_db_name", side_effect=myconst.fun)
def test_outer(self, mo):
self.assertEqual(outer.functio(), myconst.TEST_VALUE)
Since I mock only "myconst.get_db_name" here and inner.py now accesses get_db_name via "inner.get_db_name", the test will fail.
By using the proper name, however, this can be fixed:
#patch("outer.inner.get_db_name", return_value=myconst.TEST_VALUE)
def test_outer(self, mo):
self.assertEqual(outer.functio(), myconst.TEST_VALUE)
So the conclusion is that my approach is safe as long as I make sure that all modules accessing the database import myconst and use myconst.get_db_name. Alternatively, all modules could from myconst import get_db_name and use get_db_name directly. But I have to make this decision globally.
Because I control all code accessing get_db_name, I am safe. One can argue whether this is good style or not (presumably the latter), but technically it's safe. Were I to mock a library function instead, I could hardly control access to that function, and so mocking "where it's defined" becomes risky. This is why the sources cited warn against it.

How do I mock the hierarchy of non-existing modules?

Let's assume that we have a system of modules that exists only on production stage. At the moment of testing these modules do not exist. But still I would like to write tests for the code that uses those modules. Let's also assume that I know how to mock all the necessary objects from those modules. The question is: how do I conveniently add module stubs into current hierarchy?
Here is a small example. The functionality I want to test is placed in a file called actual.py:
actual.py:
def coolfunc():
    from level1.level2.level3_1 import thing1
    from level1.level2.level3_2 import thing2
    do_something(thing1)
    do_something_else(thing2)
In my test suite I already have everything I need: I have thing1_mock and thing2_mock. Also I have a testing function. What I need is to add level1.level2... into current module system. Like this:
tests.py
import sys

import actual

class SomeTestCase(TestCase):
    thing1_mock = mock1()
    thing2_mock = mock2()

    def setUp(self):
        sys.modules['level1'] = what should I do here?

    @patch('level1.level2.level3_1.thing1', thing1_mock)
    @patch('level1.level2.level3_2.thing2', thing2_mock)
    def test_some_case(self):
        actual.coolfunc()
I know that I can substitute sys.modules['level1'] with an object containing another object and so on. But it seems like a lot of code for me. I assume that there must be much simpler and prettier solution. I just cannot find it.
So, no one helped me with my problem and I decided to solve it by myself. Here is a micro-lib called surrogate which allows one to create stubs for non-existing modules.
Lib can be used with mock like this:
from surrogate import surrogate
from mock import patch

@surrogate('this.module.doesnt.exist')
@patch('this.module.doesnt.exist', whatever)
def test_something():
    from this.module.doesnt import exist
    do_something()
First the @surrogate decorator creates stubs for the non-existing modules, then the @patch decorator can alter them. Just like @patch, @surrogate decorators can be stacked, stubbing more than one module path. All stubs exist only for the lifetime of the decorated function.
If anyone gets any use of this lib, that would be great :)
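If you'd rather stay with standard tooling, mock.patch.dict can pre-populate sys.modules with MagicMock stand-ins and restore it afterwards; a sketch using the names from the question:
import sys
from mock import MagicMock, patch

def test_some_case():
    fake_modules = {
        'level1': MagicMock(),
        'level1.level2': MagicMock(),
        'level1.level2.level3_1': MagicMock(thing1='thing1-stub'),
        'level1.level2.level3_2': MagicMock(thing2='thing2-stub'),
    }
    with patch.dict(sys.modules, fake_modules):
        import actual
        actual.coolfunc()  # the from-imports inside now resolve to the stubs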

How do I stub a class in a module in Python for testing?

I have a module I am using which uses RealClass, so it is an internal dependency I don't have access to.
I want to be able to create a FakeClass which replaces the functionality of the RealClass for testing. I don't want to replace individual methods but the entire class.
I looked at stubble which seems to be what I want but I was wondering if mox or any of the other mocking frameworks have this functionality? Or what would you suggest to use? Maybe fudge, monkey-patching? Just looking for best practices with this stuff. Also any useful examples would be awesome.
Pseudo code:
from module import RealClass

class FakeClass:
    ...  # methods from RealClass overridden here

class Test(unittest.TestCase):
    def setUp(self): ...
    def tearDown(self): ...
    def test1(self):
        stub(RealClass, FakeClass)  # something like this, but really just want the functionality
        # classThatUsesRealClass will now use FakeClass
UPDATE:
Here's one way I found to do it. It isn't perfect but it works.
Example:
fake = FakeClass()
stub = stubout.StubOutForTesting()
stub.Set(RealClass, 'method_1', fake.method_1)
stub.Set(RealClass, 'method_2', fake.method_2)
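If stubout isn't available, the standard mock library can do the same with patchers (a sketch assuming the same RealClass and FakeClass):
from mock import patch

fake = FakeClass()
patchers = [
    patch.object(RealClass, 'method_1', fake.method_1),
    patch.object(RealClass, 'method_2', fake.method_2),
]
for p in patchers:
    p.start()
# ... exercise the code that uses RealClass ...
for p in patchers:
    p.stop()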
I think you want opinions/experiences so I'm just giving my 2 cents.
As you noticed, there are a few Python testing tools/classes/frameworks, but most of the time, given the simplicity/dynamism/openness of Python, you will limit yourself to ad-hoc test cases that involve stubbing at the interface level, and a bit of unittest... until you start using the frameworks.
There is nothing pejorative about monkey-patching, especially when it comes to performing testing/stubbing:
#!/usr/bin/env python
# minimal example of library code

class Class:
    """ a class """
    def method(self, arg):
        """ a method that does real work """
        print("pouet %s" % arg)
#!/usr/bin/env python
# minimal example for stub and tests, overriding/wrapping one method
from Class import Class

Class._real_method = Class.method

def mymethod(self, arg):
    # do what you want
    print("called stub")
    # in case you want to call the real function...
    self._real_method(arg)

Class.method = mymethod

# ...
e = Class()
e.method("pouet")
Namespaces will allow you to patch stuff inside of imported modules inside of imported modules...
Note that the above method does not work with classes in C modules.
For them you can use a wrapper class that filters on class member names using getattr/setattr, and returns the redefined members from the wrapper class.
#!/usr/bin/env python
# Stupid minimal example replacing the sys module
# (not very useful / optimal, it's just an example of patching)
import sys

class SysWrap():
    real = sys
    def __getattr__(self, attr):
        if attr == 'stderr':
            class StdErr():
                def write(self, txt):
                    print("[err: %s]" % txt)
            return StdErr()
        print("Getattr %s" % attr)
        return getattr(SysWrap.real, attr)

sys = SysWrap()

# use the real stdout
sys.stdout.write("pouet")
# use fake stderr
sys.stderr.write("pouet")
Once you become tired of performing ad-hoc testing, you'll find higher-level tools such as the ones you mentioned (stubble, fudge) useful, but to enjoy them and use them efficiently you first have to see the problems they solve and accept all the automatic things they do under the hood.
It is probable that a part of ad-hoc monkey patching will remain, it's just easier to understand, and all the tools have some limitations.
Tools empower you but you have to deeply understand them to use them efficiently.
An important aspect when deciding whether to use a tool or not is that when you transmit a chunk of code, you transmit the whole environment (including testing tools).
The next guy might not be as smart as you and skip the testing because your testing tool is too complex for him.
Generally you want to avoid using a lot of dependencies in your software.
In the end, I think that nobody will bother you if you just use unittest and ad-hoc tests/monkey-patching, provided your stuff works.
Your code might not be that complex anyway.

How can I inject an object into another namespace in python?

I'm writing some unittests for code written by someone else here at the office. Python is not my strongest language. While I've been successful with basic unit tests, mocking in python is throwing me for a loop.
What I need to do is override a call to ConfigObj and inject my own mock config/fixture into any ConfigObj call.
settings.py
from configobj import ConfigObj
config = ConfigObj('/etc/myapp/config')
utils.py
from settings import config
"""lots of stuff methods using various config values."""
What I would like to do is, in my unittests for utils.py, inject myself either for ANY call to ConfigObj or settings.py itself.
Many of the mocking libraries expect me to Mock my own classes but in the case of this app, it doesn't have any explicit classes.
Can it be done or are the python namespace restrictions too strict that I can't intervene in what a module that I'm importing imports itself?
Side note: running 2.7 so I can't do any of the tricks I've read about from 2.5.
If the tests are in a separate file from settings.py and utils.py, you can create a file mock.py:
import configobj

class MockConfigObj(object):
    pass  # mock whatever you want here

configobj.ConfigObj = MockConfigObj
and then import mock before importing (from) any module that itself imports settings. This will ensure that settings.config is created with MockConfigObj. If you want a uniform global mocking, import it before any file that imports configobj.
This works because Python will store configobj in sys.modules and check there before actually reading from a file on subsequent imports. In mock.py, the identifier ConfigObj is just a reference to the entry in sys.modules, so any changes you make will be globally visible.
This strikes me as a little hacky though but it's the best that I can think of.
Python namespaces are not strict at all within the same scope. Just rebind the variable name containing your object (or the class itself) within the scope where the original is expected, and that is good enough.
Now, whether or not what you're replacing it with behaves the same is up to you...
Couldn't you just overwrite the original function with another one?
There are no true constants in Python; you can change almost everything (in Python 2 you could even do True = False).
I faced a similar situation before. Here is how I would go about addressing your problem.
Consider a test case for a function from utils.py.
import unittest

import utils

class FooFunctionTests(unittest.TestCase):
    def setUp(self):
        utils._old_config = utils.config
        utils.config = MockClass()

    def tearDown(self):
        utils.config = utils._old_config
        del utils._old_config

    def test_foo_function_returns_correct_value(self):
        self.assertEqual("success!", utils.foo())
The following page is a good one on mocking and import
http://www.relaxdiego.com/2014/04/mocking-objects-in-python.html
Say you have a file named my_package1.py with the following code:
class A(object):
    def __init__(self):
        ...
and you then import that in my_package2.py with the code:
from my_package1 import A
The first line of my_package2.py creates a variable under the my_package2 namespace called A. Now you have two variables, my_package1.A and my_package2.A, that both point to the same class in memory. If you want the code in my_package2.py to use a mocked-up class A, then you will need to mock my_package2.A, not my_package1.A.
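Applied to this example, the patch target names the importing module's binding (a minimal sketch; the with-body stands in for whatever code you exercise):
from mock import patch

import my_package2

with patch('my_package2.A') as FakeA:
    # code that looks up A through my_package2 now gets FakeA;
    # my_package1.A is left untouched
    ...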
