Fault Injection: Logging during functional tests - python

I have a very small test case:
@pytest.mark.bar
def test_snapshot_missing_snapshot_capacity():
    fault.clear_settings()
    fault.load_settings("{0}/test/test_bar_snapshot_capacity_missing.json".format(constants.RECOVERY_DIR))
    backup_ts = bar_snapshot.create_backup(cloud_storage_driver, cloud_storage_driver_md, hydra_client)
    assert not backup_ts
where test_bar_snapshot_capacity_missing.json has:
{
    "snapshot_capacity_missing": true
}
Basically, I have injected a fault here.
Now my code which I am testing is:
if fault.is_fault("snapshot_capacity_missing"):
    log.error("One or more incomplete backup/s intended for deletion do not have snapshot capacity. "
              "Skipping deleting incomplete backup altogether.")
    return None
I don't get log.error printed to the console at all. Even if I add log.error before that statement, it does not get printed. My test case passes, though. Is there any special setting needed so that log statements show up for functional tests?
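If the tests run under pytest, the usual culprit is output and log capturing rather than the fault injection itself. A minimal sketch of enabling pytest's live logging, assuming a pytest.ini at the project root (the level shown is just an example):

# pytest.ini -- turn on live logging so log.error() output reaches the console
[pytest]
log_cli = true
log_cli_level = ERROR

The same can be done per run with pytest -o log_cli=true --log-cli-level=ERROR, and the built-in caplog fixture is an alternative if the goal is to assert on the log records instead of watching them scroll by.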

Related

How to customize the passed test message in a dynamic way

I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages, that are set up before tests run. That's not what I need.
I might be guilty of using pytest in a way I'm not supposed to
Not at all - this is exactly the kind of use case the pytest plugin system is meant to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze

def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py

def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in verbose mode yields
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]

How to test an endpoint exception using flask and pytest?

I have an endpoint that returns a list from my database. If something goes wrong along the way, I return an internal_server_error response, which has a 500 status code and takes a message as a parameter.
def get_general_ranking():
    try:
        ranking_list = GamificationService.get_general_ranking()
        return basic_response(ranking_list, 200)
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
I am implementing a unit test for this endpoint. So, right now, I have this implementation:
class TestGamificationController(unittest.TestCase):
    def setUp(self):
        """
        Function called when the class is initialized.
        """
        test_app = app.test_client()
        self.general_ranking = test_app.get('/v1/gamification/general_ranking')

    def test_endpoint_general_ranking(self):
        """
        Testing the endpoint '/v1/gamification/general_ranking'.
        """
        assert self.general_ranking.status_code == 200, "Wrong status code."
        assert len(self.general_ranking.json) > 0, "/v1/gamification/general_ranking is returning an empty list."
        assert self.general_ranking.content_type == 'application/json', "Wrong content_type"
But, as you can see below, when I run the test with coverage to check if I am covering 100% of my code, I get 75%. The missing lines are the exception ones.
---------- coverage: platform darwin, python 3.8.0-final-0 -----------
Name                                        Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------
api/controller/GamificationController.py      16      4    75%   18-21
Missing lines:
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
How can I cover this exception too using pytest? Or should I use something else?
I see three possible fixes for this:
Create a custom route in your app that just raises said exception.
Programmatically add this custom route in your app when you start your tests.
Move your global error handler function to its own file and ignore it from your coverage.
Personally, 1 is the easiest: guard the route with a debug/dev environment check and have it raise a not-found error when that check is off.
2 is doable if you use your Flask app factory to generate the app and add the custom route during test setup, although I'm not sure that is enough to avoid the AssertionError Flask raises if you modify its routes after the first request has been handled, which is usually the case when you instantiate your app in your conftest.py.
3 is kind of cheating I suppose.
Hope you got this sorted out. Would like to know how you solved this.
Update: The best way I've found to do this is to use pytest's pytest_sessionstart hook, which runs before any tests. I use it to register the custom error endpoints. This approach works best since you won't have to pollute the codebase with test-specific logic.
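A rough sketch of that idea, assuming a Flask app object importable from the project (the import path, route, and exception below are placeholders, not from the original code):

# conftest.py -- sketch only; the import path and route are assumptions
from api import app  # hypothetical location of the Flask app under test

def pytest_sessionstart(session):
    # Register a test-only route that always raises, so the error-handling
    # path (and the 500 response) can be exercised by a normal test request.
    @app.route('/v1/_test/raise_error')
    def _raise_error():
        raise RuntimeError('forced failure for coverage')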

Proper way to deal with fixture data in Python

My program is producing natural language sentences.
I would like to test it properly by setting the random seed to a fixed value and then:
producing expected results;
comparing generated sentences with expected results;
if they differ, asking the user if the generated sentences were actually the expected results, and in this case, updating the expected results.
I have already come across such systems in JS, so I am surprised not to find one in Python. How do you deal with such situations?
There are many testing frameworks in Python, with two of the most popular being PyTest and Nose. PyTest tends to cover all the bases, but Nose has a lot of nice extras as well.
With nose, fixtures are covered early on in the docs. The example they give looks something like
from nose.tools import with_setup

def setup_func():
    "set up test fixtures"

def teardown_func():
    "tear down test fixtures"

@with_setup(setup_func, teardown_func)
def test():
    "test ..."
In your case, with the manual review, you may need to build that logic directly into the test itself.
Edit with a more specific example
Building on the example from Nose, one way you could address this is by writing the test
from nose.tools import eq_, with_setup

def setup_func():
    "set your random seed"

def teardown_func():
    "whatever teardown you need"

@with_setup(setup_func, teardown_func)
def test():
    expected = "the correct answer"
    actual = "make a prediction ..."
    eq_(expected, actual, "prediction did not match!")
When you run your tests, if the model does not produce the correct output, the tests will fail with "prediction did not match!". In that case, you should go to your test file and update expected with the expected value. This procedure isn't as dynamic as typing it in at runtime, but it has the advantage of being easily versioned and controlled.
One drawback of asking the user whether to replace the expected answer is that the test can no longer run unattended; this is why test frameworks do not allow reading from input.
I really wanted this feature, so my implementation looks like:
import filecmp
import logging
import os
import random
import sys
from pathlib import Path

def compare_results(expected, results):
    if not os.path.isfile(expected):
        logging.warning("The expected file does not exist.")
    elif filecmp.cmp(expected, results):
        logging.debug("%s is accepted." % expected)
        return
    content = Path(results).read_text()
    print("The test %s failed." % expected)
    print("Should I accept the results?")
    print(content)
    while True:
        try:
            keep = input("[y/n]")
        except OSError:
            assert False, "The test failed. Run this file directly to accept the result."
        if keep.lower() in ["y", "yes"]:
            Path(expected).write_text(content)
            break
        elif keep.lower() in ["n", "no"]:
            assert False, "The test failed and you did not accept the answer."
        else:
            print("Please answer yes or no.")

def test_example_iot_root(setup):
    ...
    compare_results(EXPECTED_DIR / "iot_root.test", tmp.name)

if __name__ == "__main__":
    from inspect import getmembers, isfunction

    def istest(o):
        return isfunction(o[1]) and o[0].startswith("test")

    # Seed the RNG and call each test function directly so input() works.
    for o in getmembers(sys.modules[__name__]):
        if istest(o):
            random.seed(1)
            o[1](setup)
When I run this file directly, it asks me whether or not to replace the expected results. When I run it from pytest, input() raises an OSError, which exits the loop. Definitely not perfect.

How to rewrite a Django test case to avoid unpredictable occasional failures

I have a test case that's written exactly like this
def test_material_search_name(self):
    """
    Tests for `LIKE` condition in searches.
    For both name and serial number.
    """
    material_one = MaterialFactory(name="Eraenys Velinarys", serial_number="SB2341")
    material_two = MaterialFactory(name="Nelaerla Velnaris", serial_number="TB7892")
    response = self.client.get(reverse('material-search'), {'q': 'vel'})
    self.assertEqual(response.status_code, status.HTTP_200_OK)
    self.assertEqual(response.data['count'], 2)
    self.assertEqual(response.data['results'][0]['name'], material_one.name)
    self.assertEqual(response.data['results'][1]['name'], material_two.name)
My error message is:
line 97, in test_material_search_name
    self.assertEqual(response.data['results'][0]['name'], material_one.name)
AssertionError: 'Nelaerla Velnaris' != 'Eraenys Velinarys'
- Nelaerla Velnaris
+ Eraenys Velinarys
Then when I re-run it without changing any code, it passes. This error happens occasionally, not always.
I was wondering if there is a better way to achieve the objectives of the test case without that weird failure once in a while.
The error occurs roughly once in every 50 runs of the test.
The typical test command I use:
python manage.py test app_name.tests --keepdb
Here are a few options:
Order the results you get back by name before doing the assertEquals
Collect all the names out of the results first and then for each name do self.assertIn(name, names)
Order the results the back end returns
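For example, options 1 and 2 boil down to comparing order-independent data. A sketch of the same test with the order-sensitive assertions replaced (everything else as in the original):

def test_material_search_name(self):
    material_one = MaterialFactory(name="Eraenys Velinarys", serial_number="SB2341")
    material_two = MaterialFactory(name="Nelaerla Velnaris", serial_number="TB7892")
    response = self.client.get(reverse('material-search'), {'q': 'vel'})
    self.assertEqual(response.status_code, status.HTTP_200_OK)
    self.assertEqual(response.data['count'], 2)
    # Compare sorted name lists so the database's result order no longer matters.
    returned_names = sorted(result['name'] for result in response.data['results'])
    self.assertEqual(returned_names, sorted([material_one.name, material_two.name]))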

Can I test all assertions of a test after the first assertion fails?

I have a list of different audio formats to which a certain file should be converted. The conversion function I have written should convert the file and return either the path to the newly created file on success, or some failure information.
self.AUDIO_FORMATS = ({'format': 'wav', 'samplerate': 44100, 'bitdepth': 16},
                      {'format': 'aac', 'samplerate': 44100, 'bitdepth': 16},
                      {'format': 'ogg', 'samplerate': 44100, 'bitdepth': 16},
                      {'format': 'mp3', 'samplerate': 44100, 'bitdepth': 16})
One possible reason for a conversion failing is a missing library, or a bug in such a library or in my use of it, so I want to run every conversion and end up with a list of passed and failed tests, where the failed ones tell me exactly which conversion caused the trouble. This is what I tried (a bit simplified):
def test_convert_to_formats(self):
    for options in self.AUDIO_FORMATS:
        created_file_path, errors = convert_audiofile(self.audiofile, options)
        self.assertFalse(errors)
        self.assertTrue(os.path.isfile(created_file_path))
Now this, of course, aborts the test as soon as the first conversion fails. I could write a separate test function for each conversion, but then I would have to write a new test for each added format, whereas now I just add a new dictionary to my AUDIO_FORMATS tuple.
Instead of asserting, store the errors in an array. At the end of your iteration, assert that the errors array is empty and potentially dump the contents of the array as the assertion failure reason.
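A sketch of that idea applied to the test above (the per-format labels are just for readable failure output):

def test_convert_to_formats(self):
    failures = []
    for options in self.AUDIO_FORMATS:
        created_file_path, errors = convert_audiofile(self.audiofile, options)
        if errors:
            failures.append("%s: conversion errors: %s" % (options['format'], errors))
        elif not os.path.isfile(created_file_path):
            failures.append("%s: no output file at %s" % (options['format'], created_file_path))
    # A single assertion at the end, so every format gets a chance to run.
    self.assertFalse(failures, "failed conversions:\n" + "\n".join(failures))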
Why not use try...except?
errors = []
for option in optionlist:
    try:
        assert_and_raise1(option)
        assert_and_raise2(...)
    except Exception as e:
        errors.append("[%s] fail: %s" % (option, e))
for e in errors:
    print(e)
