Python: Using logging info in nose/unittest?

I have a test function that manipulates the internal state of an object. The object logs the following using logging.info().
INFO:root:_change: test light red
INFO:root:_change: test light green
INFO:root:_change: test light yellow
How can I incorporate this into a nose or unittest function so that I can have a test similar to this?
def test_thing():
    expected_log_output = "INFO:root:_change: test light red\n" + \
                          "INFO:root:_change: test light green\n" + \
                          "INFO:root:_change: test light yellow\n"
    run_thing()
    assert actual_log_output matches expected_log_output

When it comes to testing my logging, what I usually do is mock out my logger and ensure it is called with the appropriate params. I typically do something like this:
from unittest import TestCase
from mock import patch
from nose.tools import assert_equal
from hamcrest import assert_that, is_

class TestBackupInstantiation(TestCase):
    @patch('core.backup.log')
    def test_exception_raised_when_instantiating_class(self, m_log):
        with self.assertRaises(IOError) as exc:
            Backup(AFakeFactory())
        assert_equal(m_log.error.call_count, 1)
        assert_that(exc.exception, is_(IOError))
You can even assert on what the logger was called with, to validate the message itself.
I believe you can do something like:
m_log.error.assert_called_with("foo")
I might also add that for this kind of testing I like using mocking libraries such as flexmock and mock.
Also, when it comes to validating matchers, py-hamcrest is awesome.
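If you want to compare the exact formatted log lines from the question instead, unittest's assertLogs context manager (Python 3.4+) captures records in that same "LEVEL:logger:message" form. A minimal sketch, assuming run_thing() logs to the root logger as shown above:
import unittest

class TestThing(unittest.TestCase):
    def test_thing(self):
        # assertLogs captures root-logger records at INFO level or above
        with self.assertLogs(level='INFO') as captured:
            run_thing()
        self.assertEqual(captured.output, [
            'INFO:root:_change: test light red',
            'INFO:root:_change: test light green',
            'INFO:root:_change: test light yellow',
        ])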

Related

How to customize the passed test message in a dynamic way

I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages, that are set up before tests run. That's not what I need.
I might be guilty of using pytest in a way I'm not supposed to
Not at all - this is exactly the kind of use case the pytest plugin system is supposed to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze

def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py

def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in the verbose mode
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]

How to test an endpoint exception using flask and pytest?

I have an endpoint that returns a list from my database. If something goes wrong along the way, I return an internal_server_error, which has a 500 status_code and takes a message as a parameter.
def get_general_ranking():
    try:
        ranking_list = GamificationService.get_general_ranking()
        return basic_response(ranking_list, 200)
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
I am implementing a unit test for this endpoint. Right now, I have this implementation:
class TestGamificationController(unittest.TestCase):
    def setUp(self):
        """
        Function called when the class is initialized.
        """
        test_app = app.test_client()
        self.general_ranking = test_app.get('/v1/gamification/general_ranking')

    def test_endpoint_general_ranking(self):
        """
        Testing the endpoint '/v1/gamification/general_ranking'.
        """
        assert self.general_ranking.status_code == 200, "Wrong status code."
        assert len(self.general_ranking.json) > 0, "/v1/gamification/general_ranking is returning an empty list."
        assert self.general_ranking.content_type == 'application/json', "Wrong content_type"
But, as you can see below, when I run the test with coverage to check if I am covering 100% of my code, I get 75%. The missing lines are the exception ones.
---------- coverage: platform darwin, python 3.8.0-final-0 -----------
Name                                        Stmts   Miss  Cover   Missing
---------------------------------------------------------------------------
api/controller/GamificationController.py      16      4    75%    18-21

Missing lines:

    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
How can I cover this exception too using pytest? Or should I use something else?
I see three possible fixes for this:
Create a custom route in your app that just raises said exception.
Programmatically add this custom route in your app when you start your tests.
Move your global error handler function to its own file and ignore it from your coverage.
Personally, 1 is the easiest: guard the route with a debug/dev environment check and have it return a route-not-found error when that check is off.
2 is doable if you use your Flask app factory to create the app and register the custom route at test execution time, although I'm not sure this avoids the AssertionError that Flask raises if you modify its routes after the first request has been handled, which is usually the situation when you instantiate your app in your conftest.py.
3 is kind of cheating I suppose.
Hope you got this sorted out. Would like to know how you solved this.
Update: The best way I've found to do this is to use pytest's pytest_sessionstart hook, which runs before any tests. I use it to register the custom error endpoints. This approach works best since you won't have to pollute the codebase with test-specific logic.
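For illustration, a rough sketch of that update; the import path and the route name here are assumptions, not the original code:
# conftest.py -- sketch only; `myapi` and the route path are hypothetical
from myapi import app  # the same Flask app object the tests import


def pytest_sessionstart(session):
    # Runs before collection and before any test, so the route is added
    # before the first request is handled and Flask's late-registration
    # assertion is never triggered.
    @app.route('/v1/test/force_error')
    def force_error():
        raise RuntimeError('forced failure for tests')
A test can then request /v1/test/force_error through the test client and assert that the global error handler returns the expected 500 response.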

How to run the python unittest N number of times

I have a Python unittest like the one below, and I want to run this whole Test N number of times.
class Test(TestCase):
    def test_0(self):
        .........
        .........
        .........

Test.Run(name=__name__)
Any suggestions?
You can use parameterized tests. There are different modules to do that. I use nose to run my unittests (more powerful than the default unittest module) and there's a package called nose-parameterized that allows you to write a factory test and run it a number of times with different values for variables you want.
If you don't want to use nose, there are several other options for running parameterized tests.
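For instance, nose-parameterized is now published as parameterized and also works with plain unittest. A minimal sketch, where my_func and the expected values are placeholders:
from unittest import TestCase

from parameterized import parameterized  # pip install parameterized


class TestMyFunc(TestCase):
    # expands into test_doubles_0, test_doubles_1, test_doubles_2
    @parameterized.expand([(1, 2), (2, 4), (3, 6)])
    def test_doubles(self, value, expected):
        self.assertEqual(my_func(value), expected)  # my_func is a placeholder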
Alternatively, you can execute any number of test conditions in a single test (as soon as one fails, the test reports an error). In your particular case this may make more sense than parameterized tests, because in reality it is only one test; it just needs a large number of runs of the function to reach some level of confidence that it is working properly. So you can do:
import random
import unittest

class Test(unittest.TestCase):
    def test_myfunc(self):
        for _ in range(100):
            value = random.random()
            # placeholder assertion -- replace with a check of the function under test
            self.assertEqual(value, value + 2)

if __name__ == '__main__':
    unittest.main()
Why? Because the test_0 method contains a random element: each time it runs it selects a random configuration and tests against it, so I am not testing the same thing multiple times.
Randomness in a test makes it non-reproducible. One day you might get 1 failure out of 100, and when you run it again, it’s already gone.
Use a modern testing tool to parametrize your test with a sequential number, then use random.seed to have a random but reproducible test case for each number in a sequence.
portusato suggests nose, but pytest is a more modern and popular tool:
import random, pytest

@pytest.mark.parametrize('i', range(100))
def test_random(i):
    orig_state = random.getstate()
    try:
        random.seed(i)
        data = generate_random_data()
        assert my_algorithm(data) == works
    finally:
        random.setstate(orig_state)
pytest.mark.parametrize “explodes” your single test_random into 100 individual tests — test_random[0] through test_random[99]:
$ pytest -q test.py
....................................................................................................
100 passed in 0.14 seconds
Each of these tests generates different, random, but reproducible input data to your algorithm. If test_random[56] fails, it will fail every time, so you will then be able to debug it.
If you don't want your test to stop after the first failure, you can use subTest.
class Test(TestCase):
    def test_0(self):
        for i in [1, 2, 3]:
            with self.subTest(i=i):
                self.assertEqual(squared(i), i**2)
Docs

How to interact with the UI when testing an application written by kivy?

The application is written with Kivy.
I want to test a function via pytest, but in order to test it I need to initialize the object first. However, the object needs something from the UI when it is initialized, and since I am in the testing phase I don't know how to retrieve that something from the UI.
This is the class that contains the error handling:
class SaltConfig(GridLayout):
    def check_phone_number_on_first_contact(self, button):
        s = self.instanciate_ServerMsg(tt)
        try:
            s.send()
        except HTTPError as err:
            print("[HTTPError] : " + str(err.code))
            return
        # some code when running without error

    def instanciate_ServerMsg():
        return ServerMsg()
This is the helper class which generates the ServerMsg object used by the former class.
class ServerMsg(OrderedDict):
    def send(self, answerCallback=None):
        # send something to server via urllib.urlopen
This is my test code:
class TestSaltConfig:
    def test_check_phone_number_on_first_contact(self):
        myError = HTTPError(url="http://127.0.0.1", code=500,
                            msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myError

        sc = SaltConfig(ds_config_file_missing.data_store)

        def mockreturn():
            return mockServerMsg

        monkeypatch.setattr(sc, 'instanciate_ServerMsg', mockreturn)
        sc.check_phone_number_on_first_contact()
I can't initialize the object; it throws an AttributeError when initializing since it needs some value from the UI.
So I get stuck.
I tried to mock the object and then patch the function back to the original one, but that won't work either, since the function itself has logic related to the UI.
How can I solve this? Thanks.
I wrote an article about testing Kivy apps together with a simple runner, KivyUnitTest. It works with unittest, not with pytest, but it shouldn't be hard to rewrite it so that it fits your needs. In the article I explain how to "penetrate" the UI's main loop, after which you can happily do things like this with a button:
button = <button you found in widget tree>
button.dispatch('on_release')
and many more. Basically you can do anything with such a test and you don't need to test each function independently. I mean... it's a good practice, but sometimes (mainly when testing UI), you can't just rip the thing out and put it into a nice 50-line test.
This way you can do exactly what a real user would do when using your app, and therefore you can even catch issues you'd have trouble finding when testing the conventional way, e.g. weird or unexpected user behavior.
Here's the skeleton:
import unittest
import os
import sys
import time
import os.path as op
from functools import partial
from kivy.clock import Clock

# when you have a test in <root>/tests/test.py
main_path = op.dirname(op.dirname(op.abspath(__file__)))
sys.path.append(main_path)

from main import My


class Test(unittest.TestCase):
    def pause(*args):
        time.sleep(0.000001)

    # main test function
    def run_test(self, app, *args):
        Clock.schedule_interval(self.pause, 0.000001)

        # Do something

        # Comment out if you are editing the test, it'll leave the
        # Window opened.
        app.stop()

    def test_example(self):
        app = My()
        p = partial(self.run_test, app)
        Clock.schedule_once(p, 0.000001)
        app.run()


if __name__ == '__main__':
    unittest.main()
However, as Tomas said, you should separate UI and logic when possible, or better said, when it's an efficient thing to do. You don't want to mock your whole big application just to test a single function that requires communication with UI.
I finally made it work. This just gets things done; I think there must be a more elegant solution. The idea is simple, given that all lines are simply value assignments except the s.send() statement.
We just mock the original object: every time an error pops up in the testing phase (because the object lacks some value from the UI), we mock that part, and we repeat this step until the test method can finally check whether the function handles the HTTPError or not.
In this example we only need to mock a PhoneNumber class, which is lucky, but sometimes we may need to handle more, so obviously @KeyWeeUsr's answer is a more suitable choice for a production environment. I just list my thinking here for somebody who wants a quick solution.
@pytest.fixture
def myHTTPError(request):
    """
    Generate an HTTPError with the parameters passed in
    from pytest_generate_tests(metafunc).
    """
    httpError = HTTPError(url="http://127.0.0.1", code=request.param,
                          msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
    return httpError


class TestSaltConfig:
    def test_check_phone_number(self, myHTTPError, ds_config_file_missing):
        """
        Raise an HTTP 500 error, and invoke the original function with this error.
        Test to see if it can handle it; if it can't, the test will fail.
        The function is located in configs.py, line 211.
        This test will run 2 times with different HTTP status codes, 404 and 500.
        """
        # A helper class used to cover the runtime error,
        # since a Mock object can't fake properties created via __init__()
        class PhoneNumber:
            text = "610274598038"

        # Mock the ServerMsg class, and apply the custom
        # HTTPError to the send() method
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myHTTPError

        # Mock the SaltConfig class and change some of its
        # members to our custom ones
        mockSalt = mock.Mock(spec=SaltConfig)
        mockSalt.phoneNumber = PhoneNumber()
        mockSalt.instanciate_ServerMsg.return_value = mockServerMsg
        mockSalt.dataStore = ds_config_file_missing.data_store

        # Point check_phone_number_on_first_contact()
        # back at the original function
        mockSalt.check_phone_number_on_first_contact = SaltConfig.check_phone_number_on_first_contact

        # Call the function to do the test
        mockSalt.check_phone_number_on_first_contact(mockSalt, "button")

Skipping tests with green test runner in Python

At the moment I am using py.test to run my tests, and I define skipped tests as follows:
@pytest.mark.skipif(True, reason="blockchain.info support currently disabled")
class BlockChainBTCTestCase(CoinTestCase, unittest.TestCase):
    ...

    @pytest.mark.skipif(is_slow_test_hostile(), reason="Running send + receive loop may take > 20 minutes")
    def test_send_receive_external(self):
        """ Test sending and receiving external transaction within the backend wallet. """
Does green provide corresponding facilities if I want to migrate my tests to green?
Yes! Green supports unittest's built-in unittest.skipIf(condition, reason) function, as well as the rest of the skip functions and exceptions like skip(), skipUnless(), and SkipTest.
@unittest.skipIf(True, reason="Just skip all the tests in the test case.")
class MyTestCase(unittest.TestCase):
    ...


class MyOtherTestCase(unittest.TestCase):
    @unittest.skipIf(stuff_is_slow(), reason="Stuff is slow right now.")
    def test_fast_stuff(self):
        "This is a great test if stuff is fast at the moment."
        ...
Note that this requires Python 2.7 or later.
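The other helpers mentioned above work the same way under green; for instance, a quick sketch using skipUnless and SkipTest (network_available and credentials_configured are placeholder conditions):
import unittest


class MyNetworkTestCase(unittest.TestCase):
    @unittest.skipUnless(network_available(), "No network connection.")
    def test_download(self):
        ...

    def test_upload(self):
        # raising SkipTest inside the test body also marks it as skipped
        if not credentials_configured():
            raise unittest.SkipTest("No credentials configured.")
        ...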
