Skipping tests with green test runner in Python

At the moment I am using py.test to run the tests, and I mark skipped tests like this:
@pytest.mark.skipif(True, reason="blockchain.info support currently disabled")
class BlockChainBTCTestCase(CoinTestCase, unittest.TestCase):
    ...

    @pytest.mark.skipif(is_slow_test_hostile(), reason="Running send + receive loop may take > 20 minutes")
    def test_send_receive_external(self):
        """Test sending and receiving an external transaction within the backend wallet."""
Does green provide corresponding facilities if I want to migrate my tests to green?

Yes! Green supports unittest's built-in unittest.skipIf(condition, reason) function, as well as the rest of the skip functions and exceptions like skip(), skipUnless(), and SkipTest.
@unittest.skipIf(True, reason="Just skip all the tests in the test case.")
class MyTestCase(unittest.TestCase):
    ...

class MyOtherTestCase(unittest.TestCase):
    @unittest.skipIf(stuff_is_slow(), reason="Stuff is slow right now.")
    def test_fast_stuff(self):
        """This is a great test if stuff is fast at the moment."""
        ...
Note that this requires Python 2.7 or later.
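For reference, the other skip mechanisms mentioned above look the same under green as under plain unittest; a minimal sketch (the conditions and names here are only illustrative):
import sys
import unittest

class MoreSkipExamples(unittest.TestCase):
    @unittest.skip("Always skipped, unconditionally.")
    def test_not_ready(self):
        ...

    @unittest.skipUnless(sys.platform.startswith("linux"), "Requires Linux.")
    def test_linux_only(self):
        ...

    def test_skip_decided_at_runtime(self):
        # Raising SkipTest skips the test imperatively from inside the body.
        raise unittest.SkipTest("Decided to skip while the test was running.")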

Related

How to customize the passed test message in a dynamic way

I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages that are set up before the tests run. That's not what I need.
I might be guilty of using pytest in a way I'm not supposed to
Not at all - this is exactly the kind of use case the pytest plugin system is supposed to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze

def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py

def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in verbose mode yields
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]
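For completeness: the example assumes mylib.squeeze returns the compression ratio as a fraction, so a hypothetical stand-in like the one below is enough to reproduce the output shown above.
# mylib.py - hypothetical stand-in so the example runs end to end.
def squeeze():
    # Pretend the data was squeezed to 10% of its original size.
    return 0.1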

How to test an endpoint exception using flask and pytest?

I have an endpoint that returns a list from my database. If something goes wrong along the way, I return an internal server error response, which has a 500 status code and takes a message as a parameter.
def get_general_ranking():
    try:
        ranking_list = GamificationService.get_general_ranking()
        return basic_response(ranking_list, 200)
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
I am implementing a unit test for this endpoint. So, right now, I have this implementation:
class TestGamificationController(unittest.TestCase):
    def setUp(self):
        """
        Called before each test.
        """
        test_app = app.test_client()
        self.general_ranking = test_app.get('/v1/gamification/general_ranking')

    def test_endpoint_general_ranking(self):
        """
        Testing the endpoint '/v1/gamification/general_ranking'.
        """
        assert self.general_ranking.status_code == 200, "Wrong status code."
        assert len(self.general_ranking.json) > 0, "/v1/gamification/general_ranking is returning an empty list."
        assert self.general_ranking.content_type == 'application/json', "Wrong content_type"
But, as you can see below, when I run the test with coverage to check if I am covering 100% of my code, I get 75%. The missing lines are the exception ones.
---------- coverage: platform darwin, python 3.8.0-final-0 -----------
Name                                        Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------
api/controller/GamificationController.py      16      4    75%    18-21

Missing lines:

    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
How can I cover this exception too using pytest? Or should I use something else?
I see three possible fixes for this:
1. Create a custom route in your app that just raises said exception.
2. Programmatically add this custom route to your app when you start your tests.
3. Move your global error handler function to its own file and exclude it from your coverage.
Personally, 1 is the easiest: guard the route with a debug/dev environment check and have it return a route-not-found error when that check is off.
2 is doable if you use your Flask app factory to generate the app and register the custom route during test execution, although I'm not sure that is enough to avoid the AssertionError Flask raises if you modify its routes after the first request has been handled - which is usually the case when you instantiate your app in your conftest.py.
3 is kind of cheating I suppose.
Hope you got this sorted out. Would like to know how you solved this.
Update: The best way I've found to do this is to use pytest's pytest_sessionstart hook, which runs before any tests. I use it to register the custom error endpoints. This approach works best since you won't have to pollute the codebase with test-specific logic.
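To make that concrete, a rough sketch of the pytest_sessionstart idea, assuming the Flask app object is importable; the import path and the route below are purely illustrative:
# conftest.py
from api import app  # hypothetical import path

def pytest_sessionstart(session):
    # Register a test-only endpoint before any test runs (and before the first
    # request is handled), so Flask accepts the route registration.
    @app.route('/v1/test/raise_error')
    def raise_error():
        raise RuntimeError('Deliberate failure to exercise the error path.')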

Fault Injection: Logging during functional tests

I have a very small test case:
@pytest.mark.bar
def test_snapshot_missing_snapshot_capacity():
    fault.clear_settings()
    fault.load_settings("{0}/test/test_bar_snapshot_capacity_missing.json".format(constants.RECOVERY_DIR))
    backup_ts = bar_snapshot.create_backup(cloud_storage_driver, cloud_storage_driver_md, hydra_client)
    assert not backup_ts
where test_bar_snapshot_capacity_missing.json has:
{
    "snapshot_capacity_missing": true
}
Basically, I have injected a fault here.
Now, the code I am testing is:
if fault.is_fault("snapshot_capacity_missing"):
    log.error("One or more incomplete backup/s intended for deletion do not have snapshot capacity. \
Skipping deleting incomplete backup altogether.")
    return None
I don't get the log.error output printed to the console at all. Even if I add a log.error call before that statement, it does not get printed. My test case passes, though. Is there any special setting to be made so that log statements work in functional tests?
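As an aside, pytest captures log output by default, so nothing shows on the console unless live logging is enabled (for example with --log-cli-level=ERROR) or the captured records are asserted on directly. A rough sketch of the latter using the built-in caplog fixture, assuming the code under test logs through the standard logging module:
import logging

def test_snapshot_missing_snapshot_capacity(caplog):
    fault.clear_settings()
    fault.load_settings("{0}/test/test_bar_snapshot_capacity_missing.json".format(constants.RECOVERY_DIR))

    with caplog.at_level(logging.ERROR):
        backup_ts = bar_snapshot.create_backup(cloud_storage_driver, cloud_storage_driver_md, hydra_client)

    assert not backup_ts
    # Assert on the captured records instead of relying on console output.
    assert any("snapshot capacity" in record.getMessage() for record in caplog.records)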

How to run the python unittest N number of times

I have a Python unittest like the one below, and I want to run this whole test N number of times.
class Test(TestCase):
    def test_0(self):
        .........
        .........
        .........

Test.Run(name=__name__)
Any Suggestions?
You can use parameterized tests. There are different modules to do that. I use nose to run my unittests (it's more powerful than the default unittest module), and there's a package called nose-parameterized that allows you to write a factory test and run it a number of times with different values for the variables you want.
If you don't want to use nose, there are several other options for running parameterized tests.
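As a rough illustration of that approach (the package has since been renamed to parameterized; the values below are only illustrative):
import unittest
from parameterized import parameterized

class Test(unittest.TestCase):
    @parameterized.expand([(i,) for i in range(5)])
    def test_0(self, value):
        # The same test body runs once for every parameter tuple above.
        self.assertGreaterEqual(value, 0)

if __name__ == '__main__':
    unittest.main()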
Alternatively, you can execute any number of test conditions inside a single test (as soon as one fails, the test reports an error). In your particular case this may make more sense than parameterized tests, because in reality it is only one test - it just needs a large number of runs of the function to reach some level of confidence that it is working properly. So you can do:
import random
import unittest

class Test(unittest.TestCase):
    def test_myfunc(self):
        for _ in range(100):
            value = random.random()
            # Replace this with a call to your function and the expected result.
            self.assertEqual(value, value + 2)

if __name__ == '__main__':
    unittest.main()
Why? Because the test_0 method contains a random option, so each time it runs it selects a random set of configurations and tests against those - so I am not testing the same thing multiple times.
Randomness in a test makes it non-reproducible. One day you might get 1 failure out of 100, and when you run it again, it’s already gone.
Use a modern testing tool to parametrize your test with a sequential number, then use random.seed to have a random but reproducible test case for each number in a sequence.
portusato suggests nose, but pytest is a more modern and popular tool:
import random, pytest

@pytest.mark.parametrize('i', range(100))
def test_random(i):
    orig_state = random.getstate()
    try:
        random.seed(i)
        data = generate_random_data()
        assert my_algorithm(data) == works
    finally:
        random.setstate(orig_state)
pytest.mark.parametrize “explodes” your single test_random into 100 individual tests — test_random[0] through test_random[99]:
$ pytest -q test.py
....................................................................................................
100 passed in 0.14 seconds
Each of these tests generates different, random, but reproducible input data to your algorithm. If test_random[56] fails, it will fail every time, so you will then be able to debug it.
If you don't want your test to stop after the first failure, you can use subTest.
class Test(TestCase):
    def test_0(self):
        for i in [1, 2, 3]:
            with self.subTest(i=i):
                self.assertEqual(squared(i), i**2)
Docs

Python: Using logging info in nose/unittest?

I have a test function that manipulates the internal state of an object. The object logs the following using logging.info().
INFO:root:_change: test light red
INFO:root:_change: test light green
INFO:root:_change: test light yellow
How can I incorporate this into a nose or unittest function so that I can have a test similar to this?
def test_thing():
    expected_log_output = "INFO:root:_change: test light red\n" + \
                          "INFO:root:_change: test light green\n" + \
                          "INFO:root:_change: test light yellow\n"
    run_thing()
    assert actual_log_output matches expected_log_output
When it comes to testing my logging, what I usually do is mock out my logger and ensure it is called with the appropriate params. I typically do something like this:
from unittest import TestCase
from mock import patch  # or: from unittest.mock import patch
from nose.tools import assert_equal
from hamcrest import assert_that, is_

class TestBackupInstantiation(TestCase):
    @patch('core.backup.log')
    def test_exception_raised_when_instantiating_class(self, m_log):
        with self.assertRaises(IOError) as exc:
            Backup(AFakeFactory())
        assert_equal(m_log.error.call_count, 1)
        assert_that(exc.exception, is_(IOError))
So you can even check what the logger was called with to validate the message.
I believe you can do something like:
m_log.error.assert_called_with("foo")
I might also add that when it comes to this kind of testing, I love using test frameworks like flexmock and mock.
Also, when it comes to validating matchers, py-hamcrest is awesome.
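A different option, not used above but worth sketching: unittest's assertLogs context manager captures records in exactly the LEVEL:logger:message form shown in the question (run_thing here stands in for the code under test):
import unittest

class TestThing(unittest.TestCase):
    def test_thing(self):
        # assertLogs fails the test if no records at or above INFO are emitted.
        with self.assertLogs(level='INFO') as captured:
            run_thing()
        self.assertEqual(captured.output, [
            'INFO:root:_change: test light red',
            'INFO:root:_change: test light green',
            'INFO:root:_change: test light yellow',
        ])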
