How to test an endpoint exception using Flask and pytest? - python

I have an endpoint that returns a list from my database. If something goes wrong along the way, I return an internal server error response, which has a 500 status code and takes a message as a parameter.
def get_general_ranking():
    try:
        ranking_list = GamificationService.get_general_ranking()
        return basic_response(ranking_list, 200)
    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
I am implementing a unit test for this endpoint. Right now, I have this implementation:
class TestGamificationController(unittest.TestCase):
    def setUp(self):
        """
        Called before each test method runs.
        """
        test_app = app.test_client()
        self.general_ranking = test_app.get('/v1/gamification/general_ranking')

    def test_endpoint_general_ranking(self):
        """
        Testing the endpoint '/v1/gamification/general_ranking'.
        """
        assert self.general_ranking.status_code == 200, "Wrong status code."
        assert len(self.general_ranking.json) > 0, "/v1/gamification/general_ranking is returning an empty list."
        assert self.general_ranking.content_type == 'application/json', "Wrong content_type"
But, as you can see below, when I run the test with coverage to check whether I am covering 100% of my code, I get 75%. The missing lines are the exception-handling ones.
---------- coverage: platform darwin, python 3.8.0-final-0 -----------
Name                                        Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------
api/controller/GamificationController.py      16      4    75%    18-21

Missing lines:

    except Exception as e:
        logging.error(str(e))
        cache.delete()
        return internal_server_error_response('Could not get ranking. Check log for reasons.')
How can I cover this exception too using pytest? Or should I use something else?

I see three possible fixes for this:
1. Create a custom route in your app that just raises said exception.
2. Programmatically add this custom route to your app when you start your tests.
3. Move your global error handler function to its own file and exclude it from your coverage.
Personally, option 1 is the easiest: guard the route with a debug/dev environment check that simply returns a route-not-found error when the flag is off.
Option 2 is doable if you use your Flask app factory to generate the app and register the custom route during test execution, although I'm not sure whether that is enough to avoid the AssertionError Flask raises if you modify its routes after the first request has been handled - which is usually the case when you instantiate your app in your conftest.py.
Option 3 is kind of cheating, I suppose.
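For what it's worth, coverage.py also supports excluding just the handler lines with a "# pragma: no cover" comment, a lighter-weight variant of option 3. Applied to the question's own handler it would look like this:

    def get_general_ranking():
        try:
            ranking_list = GamificationService.get_general_ranking()
            return basic_response(ranking_list, 200)
        except Exception as e:  # pragma: no cover
            logging.error(str(e))
            cache.delete()
            return internal_server_error_response('Could not get ranking. Check log for reasons.')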
Hope you got this sorted out. Would like to know how you solved this.
Update: The best way I've found of doing this is by using pytest's pytest_sessionstart hook, which runs before any tests. I use it to register the custom error endpoints. This approach works best since you won't have to pollute the codebase with test-specific logic.
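A minimal sketch of that hook, assuming the Flask app object is importable and a global error handler for Exception is registered (the import path and route name here are made up; adapt them to your layout):

    # conftest.py
    from api import app  # hypothetical import path to your Flask app

    def pytest_sessionstart(session):
        # Runs once, before any test and before the first request,
        # so Flask still accepts new routes at this point.
        @app.route('/v1/test/raise_error')
        def raise_error():
            raise Exception('forced failure to exercise the error handler')

A test can then hit the route and assert on the handler's response:

    def test_error_handler():
        response = app.test_client().get('/v1/test/raise_error')
        assert response.status_code == 500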


How to customize the passed test message in a dynamic way

I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages that are set up before the tests run. That's not what I need.
> I might be guilty of using pytest in a way I'm not supposed to
Not at all - this is exactly the kind of use case the pytest plugin system is supposed to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze

def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py
def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in verbose mode yields
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]

CloudFormation with Lambda Custom Resources error - Python

I have been looking at developing some custom resources via Lambda from CloudFormation (CF), using the custom resource helper. It started off OK, but then the CF stack took ages to create or delete. When I checked the CloudWatch logs I noticed there was an error after my Lambda ran the create or delete handlers.
[7cfecd7b-69df-4408-ab12-a764a8bf674e][2021-02-07 12:41:12,535][ERROR] send(..) failed executing requests.put(..):
Formatting field not found in record: 'requestid'
I noticed some others had this issue but found no resolution. I have used the generic code from the link below; my custom code works and completes, but it looks like it fails when passing the update back to CF. Looking through crhelper.py, the only reference I can find to 'requestid' is this:
logfmt = '[%(requestid)s][%(asctime)s][%(levelname)s] %(message)s \n'
mainlogger.handlers[0].setFormatter(logging.Formatter(logfmt))
return logging.LoggerAdapter(mainlogger, {'requestid': event['RequestId']})
Reference
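For context on the message itself: it comes from Python's logging package (3.8+), which raises ValueError("Formatting field not found in record: ...") when a Formatter references a field the log record doesn't carry. A minimal reproduction, independent of crhelper:

    import logging

    logger = logging.getLogger('demo')
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('[%(requestid)s][%(levelname)s] %(message)s'))
    logger.addHandler(handler)

    # Fine: the adapter injects 'requestid' into every record
    adapter = logging.LoggerAdapter(logger, {'requestid': 'abc-123'})
    adapter.info('formatted correctly')

    # Prints a '--- Logging error ---' traceback ending in
    # "Formatting field not found in record: 'requestid'",
    # because a plain record has no such attribute.
    logger.info('boom')

So the question's error presumably means some record reached crhelper's formatter without going through the LoggerAdapter that supplies 'requestid'.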
To understand the error you're having, we would need a reproducible code example of what you're doing. Take into consideration that every time you have some kind of error in a custom resource operation, it may take ages to finish, as you have noticed.
But there is a good alternative to the original custom resource helper that you're using which, in my experience, works very well while keeping the code much simpler (thanks to a good level of abstraction) and follows best practices: the custom resource helper framework, as explained in this AWS blog.
You can find more details about the implementation on GitHub here.
Basically, after downloading the needed dependencies and loading them into your Lambda (this depends on how you handle custom Lambda dependencies), you can manage your custom resource operations like this:
from crhelper import CfnResource
import logging

logger = logging.getLogger(__name__)

# Initialise the helper
helper = CfnResource()

try:
    ## put here your initial code for every operation
    pass
except Exception as e:
    helper.init_failure(e)

@helper.create
def create(event, context):
    logger.info("Got Create")
    print('Here we are creating some cool stuff')

@helper.update
def update(event, context):
    logger.info("Got Update")
    print('Here you update the things you want')

@helper.delete
def delete(event, context):
    logger.info("Got Delete")
    print('Here you handle the delete operation')

# this will call the defined function according to the
# CloudFormation operation that is running (create, update or delete)
def handler(event, context):
    helper(event, context)
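As a usage note, per the crhelper README: the value returned from the create/update function is used as the custom resource's PhysicalResourceId, and anything added to helper.Data becomes readable in the template via Fn::GetAtt. For example:

    @helper.create
    def create(event, context):
        # Readable in the template as !GetAtt MyResource.MyAttribute
        helper.Data.update({"MyAttribute": "some-value"})
        # Becomes the PhysicalResourceId of the custom resource
        return "my-resource-id-12345"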

Fault Injection: Logging during functional tests

I have a very small test case:
@pytest.mark.bar
def test_snapshot_missing_snapshot_capacity():
    fault.clear_settings()
    fault.load_settings("{0}/test/test_bar_snapshot_capacity_missing.json".format(constants.RECOVERY_DIR))
    backup_ts = bar_snapshot.create_backup(cloud_storage_driver, cloud_storage_driver_md, hydra_client)
    assert not backup_ts
where test_bar_snapshot_capacity_missing.json has:
{
    "snapshot_capacity_missing": true
}
Basically I have injected a fault here.
Now my code which I am testing is:
if fault.is_fault("snapshot_capacity_missing"):
    log.error("One or more incomplete backup/s intended for deletion do not have snapshot capacity. \
Skipping deleting incomplete backup altogether.")
    return None
I don't see log.error printed on the console at all. Even if I add a log.error call before the statement, it does not get printed. My test case passes, though. Is there any special setting needed so that log statements show up in functional tests?
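One thing worth knowing here: pytest captures logging output by default and only displays it for failing tests, so a passing test prints nothing. Live log output can be enabled with log_cli = true in pytest.ini, and the built-in caplog fixture lets a test assert on the records directly. A self-contained sketch of that pattern (create_backup below is a stand-in for the real bar_snapshot.create_backup):

    import logging

    log = logging.getLogger(__name__)

    def create_backup():
        # Stand-in for the code under test: logs the fault and bails out.
        log.error("Incomplete backups do not have snapshot capacity. Skipping deletion.")
        return None

    def test_snapshot_capacity_missing(caplog):
        with caplog.at_level(logging.ERROR):
            assert create_backup() is None
        assert "snapshot capacity" in caplog.text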

How to interact with the UI when testing an application written by kivy?

The application is written with Kivy.
I want to test a function via pytest, but in order to test it I need to initialize the object first. However, the object needs something from the UI when it is initialized, and since I am at the testing phase I don't know how to retrieve that something from the UI.
This is the class in which the error occurs and is handled:
class SaltConfig(GridLayout):
    def check_phone_number_on_first_contact(self, button):
        s = self.instanciate_ServerMsg()
        try:
            s.send()
        except HTTPError as err:
            print("[HTTPError] : " + str(err.code))
            return
        # some code when running without error

    def instanciate_ServerMsg(self):
        return ServerMsg()
This is the helper class which generates the ServerMsg object used by the former class.
class ServerMsg(OrderedDict):
    def send(self, answerCallback=None):
        # send something to the server via urllib.urlopen
This is my test code:
class TestSaltConfig:
    def test_check_phone_number_on_first_contact(self, monkeypatch):
        myError = HTTPError(url="http://127.0.0.1", code=500,
                            msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myError
        sc = SaltConfig(ds_config_file_missing.data_store)

        def mockreturn():
            return mockServerMsg

        monkeypatch.setattr(sc, 'instanciate_ServerMsg', mockreturn)
        sc.check_phone_number_on_first_contact("button")
I can't initialize the object: it throws an AttributeError when initializing, since it needs some values from the UI.
So I get stuck.
I tried to mock the object and then patch the function back to the original one, but that won't work either, since the function itself has logic related to the UI.
How do I solve this? Thanks.
I wrote an article about testing Kivy apps together with a simple runner - KivyUnitTest. It works with unittest, not with pytest, but it shouldn't be hard to rewrite it so that it fits your needs. In the article I explain how to "penetrate" the main loop of the UI, and this way you can happily go and do this with a button:
button = <button you found in widget tree>
button.dispatch('on_release')
and many more. Basically you can do anything with such a test and you don't need to test each function independently. I mean... it's a good practice, but sometimes (mainly when testing UI), you can't just rip the thing out and put it into a nice 50-line test.
This way you can do exactly the same thing a casual user would do when using your app, and therefore you can even catch issues you'd have trouble spotting when testing the conventional way, e.g. some weird/unexpected user behavior.
Here's the skeleton:
import unittest
import os
import sys
import time
import os.path as op
from functools import partial
from kivy.clock import Clock

# when you have a test in <root>/tests/test.py
main_path = op.dirname(op.dirname(op.abspath(__file__)))
sys.path.append(main_path)

from main import My

class Test(unittest.TestCase):
    def pause(*args):
        time.sleep(0.000001)

    # main test function
    def run_test(self, app, *args):
        Clock.schedule_interval(self.pause, 0.000001)

        # Do something

        # Comment out if you are editing the test; it'll leave the
        # Window opened.
        app.stop()

    def test_example(self):
        app = My()
        p = partial(self.run_test, app)
        Clock.schedule_once(p, 0.000001)
        app.run()

if __name__ == '__main__':
    unittest.main()
However, as Tomas said, you should separate UI and logic when possible, or better said, when it's an efficient thing to do. You don't want to mock your whole big application just to test a single function that requires communication with UI.
Finally I made it work. This just gets things done; I think there must be a more elegant solution. The idea is simple: all the lines are plain value assignments except the s.send() statement.
So we just mock the original object. Every time an error pops up in the testing phase (since the object lacks some values from the UI), we mock the missing piece, and we repeat this step until the test method can finally check whether the function handles the HTTPError or not.
In this example we only need to mock a PhoneNumber class, which is lucky, but sometimes we may need to handle more, so obviously @KeyWeeUsr's answer is the more ideal choice for a production environment. But I list my thinking here for anybody who wants a quick solution.
@pytest.fixture
def myHTTPError(request):
    """
    Generating HTTPError with the pass-in parameters
    from pytest_generate_tests(metafunc)
    """
    httpError = HTTPError(url="http://127.0.0.1", code=request.param,
                          msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
    return httpError

class TestSaltConfig:
    def test_check_phone_number(self, myHTTPError, ds_config_file_missing):
        """
        Raise an HTTP 500 error, and invoke the original function with it.
        If the function can't handle the error, the test will fail.
        The function is located in configs.py, line 211.
        This test will run 2 times with different HTTP status codes, 404 and 500.
        """
        # A setup class used to cover the runtime error,
        # since a Mock object can't fake properties created via __init__()
        class PhoneNumber:
            text = "610274598038"

        # Mock the ServerMsg class, and apply the custom
        # HTTPError to the send() method
        mockServerMsg = mock.Mock(spec=ServerMsg)
        mockServerMsg.send.side_effect = myHTTPError

        # Mock the SaltConfig class and change some of its
        # members to our custom ones
        mockSalt = mock.Mock(spec=SaltConfig)
        mockSalt.phoneNumber = PhoneNumber()
        mockSalt.instanciate_ServerMsg.return_value = mockServerMsg
        mockSalt.dataStore = ds_config_file_missing.data_store

        # Make check_phone_number_on_first_contact()
        # refer to the original function
        mockSalt.check_phone_number_on_first_contact = SaltConfig.check_phone_number_on_first_contact

        # Call the function to do the test
        mockSalt.check_phone_number_on_first_contact(mockSalt, "button")

Python: Using logging info in nose/unittest?

I have a test function that manipulates the internal state of an object. The object logs the following using logging.info().
INFO:root:_change: test light red
INFO:root:_change: test light green
INFO:root:_change: test light yellow
How can I incorporate this into a nose or unittest function so that I can have a test similar to this?
def test_thing():
    expected_log_output = "INFO:root:_change: test light red\n" + \
                          "INFO:root:_change: test light green\n" + \
                          "INFO:root:_change: test light yellow\n"
    run_thing()
    assert actual_log_output matches expected_log_output  # pseudocode
When it comes to testing my logging, what I usually do is mock out my logger and ensure it is called with the appropriate params. I typically do something like this:
class TestBackupInstantiation(TestCase):
    @patch('core.backup.log')
    def test_exception_raised_when_instantiating_class(self, m_log):
        with self.assertRaises(IOError) as exc:
            Backup(AFakeFactory())
        assert_equal(m_log.error.call_count, 1)
        assert_that(exc.exception, is_(IOError))
You can even assert on what the logger was called with, to validate the message.
I believe you can do something like:
m_log.error.assert_called_with("foo")
I might also add that when it comes to this kind of testing, I love using test frameworks like flexmock and mock.
Also, when it comes to validating matchers, py-hamcrest is awesome.
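If the goal is to compare against the formatted output shown in the question, the standard library covers that too: unittest's assertLogs (Python 3.4+) captures records and exposes them in exactly that LEVEL:logger:message form. A self-contained sketch with a stand-in run_thing():

    import logging
    import unittest

    def run_thing():
        # Stand-in for the object under test.
        for colour in ('red', 'green', 'yellow'):
            logging.info('_change: test light %s', colour)

    class TestThing(unittest.TestCase):
        def test_thing(self):
            with self.assertLogs(level='INFO') as captured:
                run_thing()
            self.assertEqual(captured.output, [
                'INFO:root:_change: test light red',
                'INFO:root:_change: test light green',
                'INFO:root:_change: test light yellow',
            ])

    if __name__ == '__main__':
        unittest.main()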
