Let's say we have three Python modules: app.py, commons.py and c.py.
commons.py:

def func():
    pass

def init(app):
    global func
    func = app.func
app.py:

import commons
import c

# Doing some initialization so that we are ready to override the function
c.foo()
# the app object contains a function called func which we wish to
# install into commons.py
commons.init(app)
c.foo()
c.py:

from commons import *

def foo():
    func()
What I want to accomplish is that the first time c.foo() is called it does nothing, and the second time it executes app.func. Every module in my project depends on func in commons.py, and the function has to be bound at runtime because it is attached to an object.
Is there a way to import by reference in Python and change the pointer to the function in all modules?
What I've done so far is to have an init function in every module and simply overwrite empty lambda functions, but this seems like a poor solution.
You can achieve this with a further level of indirection:
commons.py

def normalFunc():
    pass

def func():
    normalFunc()

def init(app):
    global normalFunc
    normalFunc = app.func
This works because any other module that has the usual from commons import func and then calls func() still calls the same func().
It's just that the internals of func() change each time the init() function is called.
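The whole indirection can be sketched in a single runnable file; the function names mirror the answer above, and the lambda stands in for the hypothetical app.func:

```python
def normalFunc():
    # default implementation: does nothing useful yet
    return "default"

def func():
    # extra level of indirection: normalFunc is looked up at call time,
    # so callers of func() always see the current implementation
    return normalFunc()

def init(new_impl):
    # rebind only the internal reference; func itself never changes
    global normalFunc
    normalFunc = new_impl

before = func()          # uses the default implementation
init(lambda: "patched")  # swap in the runtime-provided implementation
after = func()           # same func() name, new behavior
print(before, after)
```

Because every importer keeps calling the unchanged func object, the swap is visible everywhere at once, which is exactly what the question's per-module init functions were trying to achieve by hand.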
I need to create a module-scoped fixture in which I mock module_a and module_c as used inside module_b.Module_B_Class(). I cannot use the mock.patch decorator because it provides a function-scoped mock, and I also need to assert that invoking Module_B_Class calls a specific function on module_a and another function on module_c. So I used the pytest-cases unpack_into feature and wrote the following fixture:
@pytest_cases.fixture_plus(scope="module", unpack_into="mocked_module_a,mocked_module_b")
def my_fixture():
    with mock.patch('my_top_module.my_sub_module.module_b.module_a') as module_a_mock:
        with mock.patch('my_top_module.my_sub_module.module_b.module_c') as module_c_mock:
            module_a_mock.my_func = MagicMock(return_value='Hello world')
            module_c_mock.my_func_2 = MagicMock(return_value='Good morning')
However, when I run the following:
def test_my_class(mocked_module_a, mocked_module_b):
    my_class = Module_B_Class()
    my_class.run()
    mocked_module_a.assert_called_once()
    mocked_module_b.assert_called_once()
where Module_B_Class is defined like so:
from my_top_module.my_sub_module import module_a

class Module_B_Class():
    def run(self):
        module_a.my_func()
        module_c.my_func2()
the function that is invoked is the original one, not the replaced one. Am I patching the wrong target?
Fixtures are described in the pytest documentation. The basic principle is that the code before the yield is executed before the test (or, depending on the scope, before the first test in each module or test class), and the code after the yield is executed after the test (or after the last test in the module or class):
@pytest.fixture
def my_fixture():
    do_setup()
    yield
    do_teardown()
You can also return a value using yield, of course.
For a context manager, that means you have to yield before going out of scope:
@pytest.fixture(scope="module")
def my_fixture():
    with mock.patch('my_top_module.my_sub_module.module_b.module_a') as module_a_mock:
        module_a_mock.my_func = MagicMock(return_value='Hello world')
        yield module_a_mock
You can now access the mock via the fixture name in your test, if you need to. In this case, the code after the yield runs only once the tests in the current module have executed, and at that point the patch is reverted.
If you don't yield in this case, you go out of scope immediately on fixture execution, meaning that the patch is reverted before you even get to the test.
UPDATE:
Here is the version for the updated question which uses pytest_cases:
@pytest_cases.fixture_plus(scope="module",
                           unpack_into="mocked_module_a,mocked_module_c")
def my_fixture():
    with mock.patch(
            'my_top_module.my_sub_module.module_b.module_a') as module_a_mock:
        with mock.patch(
                'my_top_module.my_sub_module.module_b.module_c') as module_c_mock:
            module_a_mock.my_func = mock.MagicMock(return_value='Hello world')
            module_c_mock.my_func2 = mock.MagicMock(return_value='Good morning')
            yield (module_a_mock, module_c_mock)
def test_my_class(mocked_module_a, mocked_module_c):
    my_class = Module_B_Class()
    my_class.run()
    mocked_module_a.my_func.assert_called_once()
    mocked_module_c.my_func2.assert_called_once()
Note: I have renamed mocked_module_b to mocked_module_c to avoid confusion. Also, assert_called_once had been called on the module instead of on the function.
I want to test a module A that uses decorators with arguments. The arguments get evaluated when module A is loaded. For some of the decorator args, I set the value by calling a function foo in module B.
# A.py
import B

@deco(arg1=B.foo())
def bar():
    ...
When I want to test A, I want to mock B.foo so that the decorator argument is set for my test cases. I think that B.foo must be mocked before A loads B.
In the unit test, as a caller of A, how do I mock B.foo to ensure the mocked version is used when evaluating the decorator arguments in A?
If you want to ensure that the mock is really used, you have to reload module A after patching foo, as bar already had been evaluated with the original foo. You could wrap that in a fixture like this (untested):
import importlib
from unittest import mock

import pytest

import A

@pytest.fixture
def mocked_foo():
    with mock.patch('B.foo') as mocked:
        importlib.reload(A)
        yield mocked

def test_bar(mocked_foo):
    ...
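The reason the reload is needed can be shown in a single file: decorator arguments run once, at definition time, so patching the argument's source afterwards changes nothing. The names deco, foo and bar below are hypothetical stand-ins for the question's modules:

```python
def deco(arg1):
    # a decorator factory: arg1 is evaluated by the caller BEFORE
    # the decorated function is even created
    def wrapper(fn):
        def inner():
            return arg1  # the value captured at definition time
        return inner
    return wrapper

def foo():
    return "original"

@deco(arg1=foo())  # foo() runs here, when this module is loaded
def bar():
    ...

first = bar()
foo = lambda: "mocked"  # rebinding foo now is too late
second = bar()          # bar still holds the value from load time
print(first, second)
```

Both calls return "original", which is why the mock has to be in place before the module is (re)loaded, hence the importlib.reload(A) inside the fixture.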
I have the following code:
class A:
    def m(self):
        print(...)  # 'my_var' should be printed

my_var = A()
my_var.m()
What code should I type inside of m to output the name of the created variable? I need a solution for Python 3.
Objects don't know which namespace variables they happen to be bound to at any given time. Such variables hold a forward reference to the object, but the object does not maintain back references to the potentially many variables. Imported modules are listed in sys.modules, and you can scan those for your object. This catches module-level variables, but not any container, class, class instance, or local function namespace that also happens to hold the variable.
test1.py - Implements class that scans for itself in known modules
import sys

class A:
    def m(self):
        for name, module in sys.modules.items():
            try:
                for varname, obj in module.__dict__.items():
                    if obj is self:
                        print('{}.{}'.format(name, varname))
            except AttributeError:
                pass

a = A()
test2.py - tests the code
import test1

my_a = test1.a
my_a.m()
running the code
$ python3 test2.py
__main__.my_a
test1.a
This might be a terribly simple one, but I don't know the "right" answer. Assume that I have this script:
import utils

bar = 1
utils.foo()
print(bar)
Furthermore, the module utils is:
def foo():
    bar = bar + 1
As given above, I obviously get:
UnboundLocalError: local variable 'bar' referenced before assignment
How can I use bar inside foo()? In my specific case, I don't really want to alter foo, but I do need to be able to use bar and its state inside foo().
One workaround would be to pass bar to foo():

def foo(bar):
    return bar + 1

And replace the third line of the script with bar = utils.foo(bar).
However, this feels like a cumbersome solution; in particular if bar is a complex object.
I am interested in a best-practice approach the case described above.
Why don't you want to alter foo? If you import a module, you want to use its functionality. If the foo function takes no parameters, then bar and any other variables in it belong to the module utils itself. If you want to use the function with values that do not live inside the module, then:
def foo(bar):
    return bar + 1
is totally acceptable.
EDIT:

# When you create the class foo1, just set bar in the constructor.
class foo1:
    def __init__(self, bar):
        self.bar = bar
Imagine this situation:

import someModule

# now you want to use a function of this module
someModule.foo()

Maybe then you would get an error like "bar is not defined", because the modules are not loosely coupled. Either give foo a parameter as you proposed (totally acceptable), or set the bar value via a constructor or a setBar method.
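A minimal sketch of the constructor suggestion, with illustrative names (Counter is not from the original utils module): the state travels with the object, so foo needs no free variable at all:

```python
class Counter:
    """Hold bar as instance state instead of a module-level global."""

    def __init__(self, bar):
        self.bar = bar

    def foo(self):
        # the method mutates its own state; no global, no parameter passing
        self.bar += 1
        return self.bar

c = Counter(1)
c.foo()
print(c.bar)  # the object carries the updated value
```

This keeps the module loosely coupled: callers decide what the initial bar is, and the mutation is encapsulated rather than reaching into another module's namespace.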
I am interested in a best-practice approach the case described above
As you describe, bar is an argument to foo, and the best practice way to pass an argument to a function is to pass it as an argument to the function.
in utils.py:
def foo(bar):
    return bar + 1
And in your other script:
import utils

bar = 1
bar = utils.foo(bar)
print(bar)
This is the best practice approach. It follows the correct semantics. It is also testable:
import unittest

import utils

class MyTest(unittest.TestCase):
    def setUp(self):
        self.bar = 1

    def test_bar(self):
        self.assertEqual(2, utils.foo(self.bar))
Is it possible to call methods on a default object? To explain what I mean, here's an example:
There's Foo.py with a class:
class Foo:
    def fooMethod(self):
        print("doSomething")

fooObject = Foo()
And another pythonscript Bar.py:
from Foo import *

# what I can do now is:
fooObject.fooMethod()

# what I need is that this:
fooMethod()

# is automatically executed on fooObject if the method was not found in Bar.py
Is there any possibility to make this work? What I need is to set a "default"-object on which methods are executed if the method was not found.
This has been done in Python's random module. It uses a simple and straightforward solution:
class Foo:
    def foo_method(self):
        print("do something")

foo_object = Foo()
foo_method = foo_object.foo_method
It would also be possible to write some introspection code to automatically propagate all methods of Foo to the module namespace, but I recommend against doing so.
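A runnable sketch of that pattern in one file (the calls counter is added here purely to make the behavior observable; it is not part of the original answer):

```python
class Foo:
    def __init__(self):
        self.calls = 0

    def foo_method(self):
        # a bound method still knows its instance, even when aliased
        # at module level, just like random.random -> Random().random
        self.calls += 1
        return "do something"

# module import time: create the default instance and expose its
# bound method under a bare, function-like name
foo_object = Foo()
foo_method = foo_object.foo_method

print(foo_method())      # dispatches to foo_object behind the scenes
print(foo_object.calls)  # the default instance saw the call
```

Callers of the bare foo_method() never see the instance, but every call still goes through foo_object, which is exactly the "default object" behavior the question asks for.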