I am attempting to mock a method and running into issues getting mock to actually override it.
app/tests/test_file.py <- contains the unit test, currently using:
@mock.patch('app.method', return_value='foo')
def test(self, thing):
    ...
    # do some stuff with app/main/server.py
    # and get its response, assert a few values
    ...
    # assert 'foo' is in return value of some stuff
The method being mocked is called by another module that server.py calls.
app/main/server.py <- what the unit test is actually interacting with
app/main/route.py <- where method being mocked is called
app/main/thing.py <- contains method to be mocked
This is with Python 2.7, and each package has an __init__.py file. The parent folder (app) contains imports for every class and method. I've tried app.method, which doesn't raise errors but doesn't work. I've tried thing.method, which throws an error. I've tried app.main.thing.method, which does nothing.
I've had success in this same test suite mocking an object and one of its methods, but that object is created and used directly in the server.py file. I'm wondering if it's because the method being called is so far down the chain. Mocking is pretty magical to me, especially in Python.
After more digging I finally figured it out; I'll leave this up for anyone else who hits the same problem (especially since it's not easily Googleable).
As @Gang specified, the full path should work; however, the module needs to be the module where the method is called, not the module where it is defined, as this person points out. Using @Gang's example:
@mock.patch('app.main.route.method')
def test(self, mock_method):
    mock_method.return_value = 'foo'
    # call and assert
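To make the layout concrete, here is a minimal sketch of the pattern; the import style in route.py and the test body are assumptions, not the actual code. Because route.py binds its own name to the method at import time, the patch has to target that binding:

# app/main/route.py (hypothetical import style)
from app.main.thing import method  # route.py now holds its own reference to method

def handle(request):
    return method(request)

# app/tests/test_file.py
import mock

@mock.patch('app.main.route.method')  # patch the name route.py actually looks up
def test(self, mock_method):
    mock_method.return_value = 'foo'
    # exercise server.py (which goes through route.py) and assert that 'foo' shows up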
I'm using pytest to patch os.makedirs for a test. In one particular test I wanted to add a side effect that raises an exception.
So I import the os module as it is imported in my script under test, patch it, and then set the side effect in my test:
from infrastructure.scripts.src.server_administrator import os

def mock_makedirs(self, mocker):
    mock = MagicMock()
    mocker.patch.object(os, "makedirs", return_value=mock)
    return mock
def test_if_directory_exist_exception_is_not_raised(self, administrator, mock_makedirs):
    mock_makedirs.side_effect = Exception("Directory already exists.")
    with pytest.raises(Exception) as exception:
        administrator.initialize_server()
    assert exception.value == "Directory already exists."
The problem I ran into was that when the mock gets called in my script under test, the side effect no longer exists. While troubleshooting, I stopped the tests in the debugger to look at the ID values of the mock I created and of the mock the patch should have set as the return value, and found that they are different instances.
I'm still relatively new to some of the testing tools in Python, so this may be me missing something in the documentation, but shouldn't the mock patched in here be the mock I created? Am I patching it wrong?
UPDATE
I even adjusted the import style to grab makedirs directly to patch it:
def mock_makedirs(self, mocker):
    mock = MagicMock()
    mocker.patch("infrastructure.scripts.src.server_administrator.makedirs", return_value=mock)
    return mock
And I still run into the same "different mocks" issue.
:facepalm:
I was patching incorrectly. I'm considering just deleting the whole question/answer, but I figured I'd leave it here in case someone runs into the same situation.
I'm defining the patch like this:
mocker.patch.object(os, "makedirs", return_value=mock)
Which would be a valid structure if I were patching the result of a function/method. That is, what this patch is saying is "when makedirs is called, return this".
What I actually want is to substitute a mock in place of the method. In its current form it makes sense that I see two different mocks, because the patch logic is currently "replace makedirs with a new mock, and when that mock is called, return this other mock (the mock I made)".
What I really want is just:
mocker.patch.object(os, "makedirs", mock)
Where my third argument (in the patch.object form) is the positional new replacement object (vs. the named return_value parameter).
In retrospect, it's pretty obvious when I think about it which is why I'm considering deleting the question, but it's an easy enough trip-up that I'm going to leave it live for now.
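For completeness, a minimal sketch of the corrected setup, assuming the same module path and administrator fixture as in the question (those names come from the question; the rest is illustrative):

import pytest
from unittest.mock import MagicMock

from infrastructure.scripts.src.server_administrator import os

@pytest.fixture
def mock_makedirs(mocker):
    # Pass the mock as the positional `new` argument so it replaces makedirs itself,
    # instead of becoming the return value of yet another auto-created MagicMock.
    mock = MagicMock()
    mocker.patch.object(os, "makedirs", mock)
    return mock

def test_existing_directory_raises(administrator, mock_makedirs):
    mock_makedirs.side_effect = Exception("Directory already exists.")
    with pytest.raises(Exception, match="Directory already exists."):
        administrator.initialize_server()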
Using Django 1.10 and Python 3.5.1.
I'm trying to mock the call_command function to throw an exception. The problem is that once it gets the side_effect, it seems to keep it for other tests as well. What am I doing wrong, and how can I 'revert' the side_effect on that function?
In this example, after running one of the tests, all other tests that run afterwards will throw the same exception, even if that test isn't supposed to throw one.
def test_run_migrations_raise_exception(self):
    with mock.patch('django.core.management.call_command', return_value=None, side_effect=Exception('e message')):
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
        call_command('run_data_migrations')
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)

def test_run_migrations_raise_flow_exception(self):
    with mock.patch('django.core.management.call_command', return_value=None, side_effect=FlowException(500, 'fe message', {'a': 1})):
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
        call_command('run_data_migrations')
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
You should not patch a function through a name that lives in your module-local (i.e. Python's "global", which is really module-level) namespace.
When in Python you do
from module.that import this
this becomes a variable in the module that contains the import statement. Any changes to module.that.this will affect the object as seen from that other module, but your local name this will still refer to the original object.
Perhaps your code is not exactly as you show us, or maybe mock.patch can figure out that the module-local call_command points to django.core.management.call_command in the other module when it applies the patch, but not when reversing it. The fact is that your module-local name call_command is being changed.
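A minimal illustration of that binding behaviour, using two hypothetical modules (helpers and consumer):

# helpers.py
def greet():
    return "hello"

# consumer.py
from helpers import greet  # consumer now has its own name bound to the original function

# elsewhere
import helpers
import consumer

helpers.greet = lambda: "patched"  # rebinds only the name inside helpers
print(consumer.greet())            # "hello"  - consumer's binding is untouched
print(helpers.greet())             # "patched"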
You can fix that by simply changing your code so it does not bind a module-level variable directly to the function you want to patch:
from django.core import management

def test_run_migrations_raise_exception(self):
    with mock.patch('django.core.management.call_command', return_value=None, side_effect=Exception('e message')):
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
        management.call_command('run_data_migrations')
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
I hope you can understand that and solve the problem. Now, that said, this use of mock makes no sense at all: the idea of using mock is that some callable used indirectly by the code you call within the patched block does not have its original effect, so the intermediate code can run and be tested. You are calling the mock object directly, so none of the original code runs: calling call_command('run_data_migrations') runs no code in your code base at all, and thus there is nothing there to test. It just calls the mocked instance, and it will not change the state of anything that could be detected with check_migrations_called.
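As a sketch of what such a test more usually looks like: the myapp.migrations_runner module and its run_data_migrations() helper below are hypothetical stand-ins for the OP's own code that calls call_command internally.

from unittest import mock

from myapp import migrations_runner  # hypothetical module that calls call_command internally


def test_run_migrations_raise_exception(self):
    with mock.patch('myapp.migrations_runner.call_command',
                    side_effect=Exception('e message')):
        # Exercise *your* code; the patched call_command raises inside it,
        # and the patch is reverted automatically when the with-block exits.
        with self.assertRaises(Exception):
            migrations_runner.run_data_migrations()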
I've been searching the internet for an example of using flexmock on Python modules, but all the docs seem to cover objects/classes. I'm wondering if it's possible to mock some values returned by a module's functions. What if that module calls another module?
ex.
def function_inside_function(id, some_string):
    test_log = {"id": id, "definition": some_string}
    return test_log

def function1(id):
    some_string = 'blah' + id  # i am totally bs-ing here
    log = function_inside_function(id, some_string)
    return log
So now I want to test each function separately, using flexmock to mock some values.
Previously, when doing the same thing with an object, I could do (say the object is assigned to test_object):
flexmock(test_object).should_receive('some_func').and_return('some_value')
where some_func is being called inside that object
but when I tried to do the same with a module, I kept getting:
FlexmockError: <function function1 at some_address> does not have attribute function_inside_function
I want to know if it's possible to use flexmock on modules, and if so, how.
After a lot of research and trial and error, it turns out that I have to use sys.modules.
Say my module is imported from path.to.module; then the syntax would be:
flexmock(sys.modules['path.to.module']).should_receive('function.of.object').and_return(response)
function.of.object is the function being called, for example requests.get; using only get will not work.
response is the response you are trying to mock. In the requests.get example, the response would be a requests.Response(), and you can then use setattr to set its attributes if flexmock complains about them. (Is there a better way to do it?)
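Putting the pieces together, here is a minimal sketch of that pattern, following the dotted should_receive form described above; path.to.module is the hypothetical module under test and is assumed to do import requests and call requests.get() somewhere inside:

import sys

import requests
from flexmock import flexmock

import path.to.module  # hypothetical module under test


def test_mocks_requests_get_inside_the_module():
    fake_response = requests.Response()
    fake_response.status_code = 200  # set whatever attributes your code needs

    # Replace requests.get as it is seen from inside path.to.module
    flexmock(sys.modules['path.to.module']).should_receive('requests.get').and_return(fake_response)

    # Any code in path.to.module that calls requests.get() now receives fake_response,
    # e.g. a hypothetical helper:
    path.to.module.fetch_something("some-id")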
I want to test a function in Python, but it relies on a module-level "private" function that I don't want called, and I'm having trouble overriding/mocking it. Scenario:
module.py
def _cmd(command, args):
    # do something nasty
    ...

def function_to_be_tested():
    # do cool things
    _cmd('rm', '-rf /')
    return 1
test_module.py
import module

def test_function():
    assert module.function_to_be_tested() == 1
Ideally, in this test I don't want _cmd to be called. I've looked at some other threads, and I've tried the following with no luck:
def test_function():
    def _cmd(command, args):
        # do nothing
        pass

    module._cmd = _cmd
although checking module._cmd against _cmd doesn't give the correct reference. Using mock:
from mock import patch

def _cmd_mock(command, args):
    # do nothing
    pass

@patch('module._cmd', _cmd_mock)
def test_function():
    ...
gives the correct reference when checking module._cmd, although function_to_be_tested still uses the original _cmd (as evidenced by it doing nasty things).
This is tricky because _cmd is a module-level function, and I don't want to move it into a module.
[Disclaimer]
The synthetic example posted in this question works; the described issue comes from a specific implementation in production code. Maybe this question should be closed as off-topic because the issue is not reproducible.
[Note] For impatient people: the solution is at the end of the answer.
Anyway, this question gave me a good point to think about: how can we patch a function reference when we cannot access the variable that holds the reference?
I have run into issues like this many times. There are lots of ways to end up in this situation, and the common ones are:
Decorators: the object we would like to replace is passed as a decorator argument or used inside the decorator's implementation
Default arguments: the thing we would like to patch is a default argument of a function or method
In both cases, refactoring the code is probably the best way to deal with it, but what if we are working with legacy code, or the decorator is a third-party decorator?
OK, our back is against the wall, but we are using Python, and in Python nothing is impossible. All we need is a reference to the function/method to patch, and instead of patching that reference we can patch its __code__: yes, I'm talking about patching the bytecode instead of the function.
Let's take a real example. I'm using the default-parameter case because it is simple, but it works in the decorator case as well.
def cmd(a):
    print("ORIG {}".format(a))

def cmd_fake(a):
    print("NEW {}".format(a))

def do_work(a, c=cmd):
    c(a)

do_work("a")
cmd = cmd_fake
do_work("b")
Output:
ORIG a
ORIG b
OK, in this case we can test do_work by passing cmd_fake, but there are cases where that is impossible: for instance, what if we need to call something like this:
def what_the_hell():
    list(map(lambda a: do_work(a), ["c", "d"]))
What we can do is patch cmd.__code__ instead of the name cmd:
cmd.__code__ = cmd_fake.__code__
So the following code:
do_work("a")
what_the_hell()
cmd.__code__ = cmd_fake.__code__
do_work("b")
what_the_hell()
gives the following output:
ORIG a
ORIG c
ORIG d
NEW b
NEW c
NEW d
Moreover, if we want to use a mock, we can do it by adding the following lines:
from unittest.mock import Mock, call

cmd_mock = Mock()

def cmd_mocker(a):
    cmd_mock(a)

cmd.__code__ = cmd_mocker.__code__
what_the_hell()
cmd_mock.assert_has_calls([call("c"), call("d")])
print("WORKS")
That prints out:
WORKS
Maybe I'm done... but the OP is still waiting for a solution to their issue:
from mock import patch, Mock

cmd_mock = Mock()

# A closure for grabbing the right function code
def cmd_mocker(a):
    cmd_mock(a)

@patch.object(module._cmd, '__code__', new=cmd_mocker.__code__)
def test_function():
    ...
Now, I should say: never use this trick unless your back is against the wall. Tests should be simple to understand and to debug... try to debug something like this and you will go mad!
I want to set up a test suite in which I read a JSON file in the setup_class method, and in that JSON file I specify which tests should run and which should not. With this approach I can choose which test cases to run by altering only the JSON file, without touching the test suite.
But in the setup_class method, when I try to do the following:
class TestCPU:
    testme = False

    @classmethod
    def setup_class(cls):
        cls.test_core.testme = True

    def test_core(self):
        print 'Test CPU Core'
        assert 1
and execute the command below:
nosetests -s -a testme
it gives the following error:
File "/home/murtuza/HWTestCert/testCPU.py", line 7, in setup_class
cls.test_core.testme=False
AttributeError: 'instancemethod' object has no attribute 'testme'
So, is it possible to set the attributes of test methods during setup_class?
The way it is defined, testme is a member of the TestCPU class, and the <unbound method TestCPU.test_core> has no idea about this attribute. You can inject the nose attribute by using cls.test_core.__dict__['testme'] = True. However, the attributes are checked before your setup_class method is called, so even though the attribute will be set, your test will be skipped. But you can certainly decorate your test with attributes at import time, like this:
import unittest

class TestCPU(unittest.TestCase):
    def test_core(self):
        print 'Test CPU Core'
        assert 1

TestCPU.test_core.__dict__['testme'] = True
You may also want to try the --pdb option to nosetests; it brings up the debugger on error so you can dive in and see what is wrong. It is definitely my second favorite thing in life.
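For comparison, nose's attrib plugin provides an attr decorator that sets the same kind of attribute declaratively, so the test is selectable with -a testme; a minimal sketch using the same TestCPU class:

import unittest
from nose.plugins.attrib import attr

class TestCPU(unittest.TestCase):
    @attr('testme')  # equivalent to setting test_core.testme at import time
    def test_core(self):
        print 'Test CPU Core'
        assert 1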
I am sure there are multiple ways to achieve this, but here is one way you can do it.
Inside your test class, create a method that reads in your JSON file and builds a global list of test methods to be skipped, say skiptests.
All you need to do now is use a setup decorator for every test case method in your suite. Within this decorator, check whether the current function is in skiptests. If it is, wrap it with nose's nose.tools.nottest decorator, which marks it so it is skipped; otherwise, return the function being tested unchanged.
To put it in code:
def setup_test_method1(func):
    if func.__name__ not in skiptests:
        return func
    else:
        return nose.tools.nottest(func)

@with_setup(setup_test_method1)
def test_method1():
    pass
I have not tested this code, but I think we can invoke a decorator within another decorator, in which case this could work.
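For illustration, here is a self-contained sketch of the same idea applied as a plain decorator at import time rather than through with_setup; the skiptests.json file name and its layout are assumptions:

import json

import nose.tools

# hypothetical file contents: {"skiptests": ["test_method1"]}
with open("skiptests.json") as fh:
    skiptests = json.load(fh)["skiptests"]

def skip_if_listed(func):
    # Mark listed functions as "not a test" so nose never collects them
    if func.__name__ in skiptests:
        return nose.tools.nottest(func)
    return func

@skip_if_listed
def test_method1():
    pass

@skip_if_listed
def test_method2():
    assert True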