I am writing a pytest plugin that produces a custom HTML report of test results. I want to store some arbitrary test information (i.e. some Python objects...) inside tests, and then reuse this information when building the report. So far I have only come up with a somewhat hackish solution.
I pass the request object to my test and fill its request.node._report_sections with my data.
This data then ends up in the TestReport.sections attribute, which is available in the pytest_runtest_logreport hook; from there I can finally generate the HTML, after which I remove all my objects from the sections attribute.
In pseudo-Python code:
def test_answer(request):
    a = MyObject("Wooo")
    request.node._report_sections.append(("call", "myobj", a))
    assert False
and
def pytest_runtest_logreport(report):
    if report.when == "call":
        # generate HTML from report.sections content
        # clean report.sections list of MyObject objects
        # (which, by the way, contains 2-tuples, i.e. ("myobj", a))
Is there a better pytest way to do this?
This way seems OK.
Improvements I can suggest:
Think about using a fixture to create the MyObject object. Then you can place the request.node._report_sections.append(("call","myobj",a)) inside the fixture, and make it invisible in the test. Like this:
@pytest.fixture
def a(request):
    a_ = MyObject("Wooo")
    request.node._report_sections.append(("call", "myobj", a_))
    return a_

def test_answer(a):
    ...
Another idea, which is suitable if you need this object in all of your tests, is to implement one of the hooks pytest_pycollect_makeitem or pytest_pyfunc_call and "plant" the object there in the first place.
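A minimal sketch of the hook approach, e.g. in conftest.py (MyObject and the "myobj" section name are just the illustrative placeholders from the question):

def pytest_pyfunc_call(pyfuncitem):
    # runs for every test function; attach the object to the item so it later
    # shows up in TestReport.sections via the item's _report_sections
    pyfuncitem._report_sections.append(("call", "myobj", MyObject("Wooo")))
    # returning None lets pytest's default implementation actually run the test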
Related
I am adding some tests to existing, not very test-friendly code. As the title suggests, I need to test whether a complex method actually calls another method, e.g.
class SomeView(...):
    def verify_permission(self, ...):
        # some logic to verify permission
        ...

    def get(self, ...):
        # some code here that I am not interested in for this test case
        ...
        if some_condition:
            self.verify_permission(...)
        # some other code here that I am not interested in for this test case
        ...
I need to write some test cases to verify that self.verify_permission is called when the condition is met.
Do I need to mock all the way down to the point where self.verify_permission is executed? Or do I need to refactor the get() method to abstract out the code and make it more test-friendly?
There are a number of points made in the comments that I strongly disagree with, but to your actual question first.
This is a very common scenario. The suggested approach with the standard library's unittest package is to utilize the Mock.assert_called... methods.
I added some fake logic to your example code, just so that we can actually test it.
code.py
class SomeView:
    def verify_permission(self, arg: str) -> None:
        # some logic to verify permission
        print(self, f"verify_permission({arg=})")

    def get(self, arg: int) -> int:
        # some code here I am not interested in for this test case
        ...
        some_condition = True if arg % 2 == 0 else False
        ...
        if some_condition:
            self.verify_permission(str(arg))
        # some other code here I am not interested in for this test case
        ...
        return arg * 2
test.py
from unittest import TestCase
from unittest.mock import MagicMock, patch

from . import code


class SomeViewTestCase(TestCase):
    def test_verify_permission(self) -> None:
        ...

    @patch.object(code.SomeView, "verify_permission")
    def test_get(self, mock_verify_permission: MagicMock) -> None:
        obj = code.SomeView()

        # Odd `arg`:
        arg, expected_output = 3, 6
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_not_called()

        # Even `arg`:
        arg, expected_output = 2, 4
        output = obj.get(arg)
        self.assertEqual(expected_output, output)
        mock_verify_permission.assert_called_once_with(str(arg))
You use a patch variant as a decorator to inject a MagicMock instance to replace the actual verify_permission method for the duration of the entire test method. In this example that method has no return value, just a side effect (the print). Thus, we just need to check if it was called under the correct conditions.
In the example, the condition depends directly on the arg passed to get, but this will obviously be different in your actual use case. But this can always be adapted. Since the fake example of get has exactly two branches, the test method calls it twice to traverse both of them.
When doing unit tests, you should always isolate the unit (i.e. function) under testing from all your other functions. That means, if your get method calls other methods of SomeView or any other functions you wrote yourself, those should be mocked out during test_get.
You want your test of get to be completely agnostic to the logic inside verify_permission or any other of your functions used inside get. Those are tested separately. You assume they work "as advertised" for the duration of test_get and by replacing them with Mock instances you control exactly how they behave in relation to get.
Note that the point about mocking out "network requests" and the like is completely unrelated. That is an entirely different but equally valid use of mocking.
Basically, you 1.) always mock your own functions and 2.) usually mock external/built-in functions with side effects (like e.g. network or disk I/O). That is it.
Also, writing tests for existing code absolutely has value. Of course it is better to write tests alongside your code. But sometimes you are just put in charge of maintaining a bunch of existing code that has no tests. If you want/can/are allowed to, you can refactor the existing code and write your tests in sync with that. But if not, it is still better to add tests retroactively than to have no tests at all for that code.
And if you write your unit tests properly, they still do their job, if you or someone else later decides to change something about the code. If the change breaks your tests, you'll notice.
As for the exception hack to interrupt the tested method early... Sure, if you want. It's lazy and calls into question the whole point of writing tests, but you do you.
No, seriously, that is a horrible approach. Why on earth would you test just part of a function? If you are already writing a test for it, you may as well cover it to the end. And if it is so complex that it has dozens of branches and/or calls 10 or 20 other custom functions, then yes, you should definitely refactor it.
I need a function to return predictable values when running tests.
For example, I have a function get_usd_rates(), which loads USD forex rates from some API.
So far, whenever I used it inside another function, I passed rates as an optional argument for testing purposes only. It seems hacky, but it worked. Like this:
def some_other_function(rates=None):
    if rates is None:
        rates = get_usd_rates()
    # do something with rates
But now I am facing a situation where I can't pass an extra argument to the function (a private method on a Django model, which is called when a model field changes).
Is there a way to make the get_usd_rates() function aware that a test is running and always return some predefined value, without a noticeable performance impact in this case?
Or what is the best way to deal with this problem?
What you need to do is mock the function. mock is a module within the unittest package. Try using mock.patch:
from unittest.mock import patch

@patch('path.to.get_usd_rates')
def your_test_function(mock_get_usd_rates):
    mock_get_usd_rates.return_value = "Some predefined value"
    # Rest of your test (anywhere get_usd_rates is used will now automatically use mock_get_usd_rates)
What happens here is that mock.patch replaces your function get_usd_rates with a mock on which you set whatever return value you want. There are various ways to do this other than a decorator (a context manager, for one). Reference: unittest.mock
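If the decorator form is awkward (for example, when the call happens deep inside a Django model method), the same thing can be done with patch as a context manager. A minimal sketch, reusing some_other_function from the question and a placeholder patch target path that you would point at wherever get_usd_rates is looked up:

from unittest.mock import patch

def test_some_other_function():
    with patch('path.to.get_usd_rates', return_value={'EUR': 0.9}):
        result = some_other_function()  # internally calls the patched get_usd_rates
        # assert on `result` as usual; the real API is never hit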
I want to test my code that is based on an API created by someone else, but I'm not sure how I should do this.
I have created a function to save the JSON into a file so I don't need to send requests each time I run the tests, but I don't know how to make it work when the original check function takes an input argument (problem_report) which is an instance of some class provided by the API, and which has a
problem_report.get_correction(corr_link) method. I wonder whether this is a sign of badly written code on my part, because I can't write a test for it, or whether I should rewrite this function in my test file as I show at the end of the code below.
# I want to test this function
def check(problem_report):
    corrections = {}
    for corr_link, corr_id in problem_report.links.items():
        if re.findall(pattern='detailCorrection', string=corr_link):
            correction = problem_report.get_correction(corr_link)
            corrections.update({corr_id: correction})
    return corrections
# function that loads the JSON from a file; normally it is downloaded via the API from some page
def load_pr(pr_id):
    print('loading')
    with open('{}{}_view_pr.json'.format(saved_prs_path, pr_id)) as view_pr:
        view_pr = json.load(view_pr)
    ...
    pr_info = {'view_pr': view_pr, ...}
    return pr_info
# create an instance of class MyPR which takes the JSON in __init__
@pytest.fixture
def setup_pr():
    print('setup')
    pr = load_pr('123')
    my_pr = MyPR(pr['view_pr'])
    return my_pr
# test function
def test_check(setup_pr):
    pr = setup_pr
    checked_pr = pr.check(setup_rft[1]['problem_report_pr'])
    assert checked_pr
# rewritten check function in the test file
@mock.patch('problem_report.get_correction', side_effect=get_corr)
def test_check(problem_report):
    corrections = {}
    for corr_link, corr_id in problem_report.links.items():
        if re.findall(pattern='detailCorrection', string=corr_link):
            correction = problem_report.get_correction(corr_link)
            corrections.update({corr_id: correction})
    return corrections
I'm not sure if I have provided enough code and explanation to understand the problem, but I hope so. I would like to know whether it is normal that some functions are just hard to test, and whether it is good practice to rewrite them separately so I can mock the functions inside the tested function. I was also thinking that I could write a new class with similar functionality, but the API is very large and it would be a very long process.
I understand your question as follows: You have a function check that you consider hard to test because of its dependency on the problem_report. To make it better testable you have copied the code into the test file. You will test the copied code because you can modify this to be easier testable. And, you want to know if this approach makes sense.
The answer is no, this does not make sense. You are not testing the real function, but completely different code. Well, the code may not start out completely different, but in a short time the copy and the original will deviate, and it will be a maintenance nightmare to ensure that the copy always resembles the original. Improving code for testability is a different story: you can make changes to the check function to improve its testability. But then exactly the same resulting function should be used both in the test and in the production code.
How to better test the function check, then? First, are you sure that the original problem_report objects really cannot be sensibly used in your tests? (Here are some criteria that help you decide: What to mock for python test cases?). Now, let's assume that you come to the conclusion that you cannot sensibly use the original problem_report.
In that case, the interface here is simple enough to define a mocked problem_report by hand. Keep in mind that Python uses duck typing, so you only have to create a class that has a links member with an items() method. Plus, your mocked problem_report class needs a get_correction() method. Beyond that, your mock does not have to produce types that are similar to the types used by problem_report. The items() method can simply return a list of lists, like [["a",2],["xxxxdetailCorrectionxxxx",4]]. The same argument holds for get_correction, which could for example simply return its argument or a derived value, like its negative.
For the above example (items() returning [["a",2],["xxxxdetailCorrectionxxxx",4]] and get_correction returning the negative of its argument) the expected result would be {4: -4}. No need to simulate real correction objects. And, you can create your mocked versions of problem_report without need to read data from files - the mocks can be setup completely from within the unit-testing code.
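A minimal sketch of such a hand-rolled fake (the class name, the link strings, and the module that check is imported from are illustrative):

class FakeProblemReport:
    links = {"a": 2, "xxxxdetailCorrectionxxxx": 4}

    def get_correction(self, corr_link):
        # return a simple derived value instead of a real correction object
        return -self.links[corr_link]

def test_check():
    from mymodule import check  # hypothetical module containing the real check
    assert check(FakeProblemReport()) == {4: -4}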
Try patching the problem_report symbol in the module. You should put your tests in a separate class.
@mock.patch('some.module.path.problem_report')
def test_check(problem_report):
    problem_report.side_effect = get_corr
    corrections = {}
    for corr_link, corr_id in problem_report.links.items():
        if re.findall(pattern='detailCorrection', string=corr_link):
            correction = problem_report.get_correction(corr_link)
            corrections.update({corr_id: correction})
    return corrections
I am using the skip decorator for a test:
@skip('I want this to skip')
def test_abc(self):
I also have a nose plugin to report test results, with a defined
def beforeTest(self, *args, **kwargs):
The test case test_abc is getting captured by the beforeTest method. How can I check for the decorator value in my beforeTest method?
I see that the definition of the unittest skip decorator has the following code:
test_item.__unittest_skip__ = True
test_item.__unittest_skip_why__ = reason
But I don't know how to access it from beforeTest.
When running, args[0].test has the test case object, but I can't seem to find where __unittest_skip__ is defined.
Thanks!
Looking at the source code, there doesn't seem to be a clean way to do this. TestCase seems to know what method it is testing based on the _testMethodName implementation detail. If you have a reference to the running test case (maybe args[0].test? I'm not familiar with nose...) you could use that, or you could parse it out of the return value from TestCase.id(). Assuming you aren't doing something really funky, it would be something like:
test_name = test_case.id().rsplit('.', 1)[-1]
test_method = getattr(test_case, test_name)
if getattr(test_method, '__unittest_skip__', False):
    # Method skipped. Don't do normal stuff.
    ...
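Wired into the plugin, that could look like the following sketch (assuming, as the question suggests, that the wrapped unittest.TestCase instance is reachable via the nose test object's .test attribute; adjust for your plugin):

def beforeTest(self, test, *args, **kwargs):
    test_case = test.test  # the underlying unittest.TestCase instance
    test_name = test_case.id().rsplit('.', 1)[-1]
    test_method = getattr(test_case, test_name)
    if getattr(test_method, '__unittest_skip__', False):
        return  # skipped test: don't do the normal reporting
    # ... normal reporting for non-skipped tests ...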
In general I want to disable as little code as possible, and I want it to be explicit: I don't want the code being tested to decide whether it's a test or not; I want the test to tell that code, "hey, BTW, I'm running a unit test, can you please not make your call to solr, and instead stick what you would send to solr in this spot so I can check it". I have my ideas, but I don't like any of them, and I am hoping there's a good Pythonic way to do this.
You can use Mock objects to intercept the method calls that you do not want to execute.
E.g. You have some class A, where you don't want method no() to be called during a test.
class A:
    def do(self):
        print('do')

    def no(self):
        print('no')
A mock object could inherit from A and override no() to do nothing.
class MockA(A):
    def no(self):
        pass
You would then create MockA objects instead of As in your test code. Another way to do mocking would be to have A and MockA implement a common interface say InterfaceA.
There are tons of mocking frameworks available. See StackOverflow: Python mocking frameworks.
In particular see: Google's Python mocking framework.
Use Michael Foord's Mock
In your unit test, do this:
from mock import Mock


class Person(object):
    def __init__(self, name):
        super(Person, self).__init__()
        self.name = name

    def say(self, str):
        print("%s says \"%s\"" % (self.name, str))

...

# In your unit test....
# create the class as normal
person = Person("Bob")
# now mock all of person's methods/attributes
person = Mock(spec=person)
# talkTo is some function you are testing
talkTo(person)
# make sure the Person class's say method was called
self.assertTrue(person.say.called, "Person wasn't asked to talk")
# make sure the person said "Hello"
args = ("Hello",)
keywargs = {}
self.assertEqual(person.say.call_args, (args, keywargs), "Person did not say hello")
The big problem that I was having was with the mechanics of the dependency injection. I have now figured that part out.
I need to import the module in the exact same way in both places to successfully inject the new code. For example, if I have the following code that I want to disable:
from foo_service.foo import solr
solr.add(spam)
I can't seem to do this in my test runner:
from foo import solr
solr = mock_object
The Python interpreter must be treating the modules foo_service.foo and foo as different entries. I changed from foo import solr to the more explicit from foo_service.foo import solr and my mock object was successfully injected.
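The same effect can be achieved with unittest.mock.patch, as long as the target string names the namespace where solr is actually looked up. A minimal sketch, assuming the code under test lives in a hypothetical module myapp.views that does from foo_service.foo import solr:

from unittest import mock

# patch the name in the module that imported it, not in foo_service.foo itself
with mock.patch('myapp.views.solr') as mock_solr:
    run_code_under_test()         # illustrative: whatever exercises myapp.views
    assert mock_solr.add.called   # the solr.add(...) call was intercepted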
Typically when something like this arises you use Monkey Patching (also called Duck Punching) to achieve the desired results. Check out this link to learn more about Monkey Patching.
In this case, for example, you would overwrite solr to just print the output you are looking for.
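A minimal sketch of that kind of monkey patch (the module path, the capture list, and run_code_under_test are illustrative):

import foo_service.foo as foo_module

captured = []
original_add = foo_module.solr.add
foo_module.solr.add = captured.append   # record instead of talking to solr

try:
    run_code_under_test()                # whatever would normally call solr.add(...)
    print(captured)                      # inspect what would have been sent to solr
finally:
    foo_module.solr.add = original_add   # always restore the real method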
You have two ways to do this with no, or (in the case of DI) minimal, modifications to your source code:
Dependency injection
Monkey patching
The cleanest way is using dependency injection: I don't really like extensive monkey patching, and there are some things that are impossible or difficult to do with it that dependency injection makes easy.
I know it's the typical use case for mock objects, but that's also an old argument... are Mock objects necessary at all, or are they evil?
I'm on the side of those who believe mocks are evil and would try to avoid changing tested code at all. I even believe such a need to modify tested code is a code smell...
If you wish to change or intercept an internal function call for testing purposes, you could also make this function an explicit external dependency, set at instantiation time, provided by both your production code and your test code. If you do that, the problem disappears and you end up with a cleaner interface.
Note that this way there is no need to change the tested code at all, neither internally nor in the test being performed.
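A minimal sketch of that kind of constructor-level injection (the class, method, and parameter names here are purely illustrative, not taken from the original code):

class Indexer:
    def __init__(self, send=None):
        # the outbound solr call is an explicit dependency; production code uses
        # the real one by default, tests pass in a stub
        self._send = send if send is not None else solr.add

    def index(self, doc):
        self._send(doc)

# in a test: capture what would have been sent instead of calling solr
captured = []
indexer = Indexer(send=captured.append)
indexer.index({"id": 1})
assert captured == [{"id": 1}]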