There are two lines that are not being executed by my Django tests when they are called via self.assertRaises.
I am using Python 3.6.9, Django 3, and Coverage.
I have this class:
class AverageWeatherService:
    subclasses = WeatherService.__subclasses__()
    valid_services = {
        subclass.service_key: subclass for subclass in subclasses
    }

    @classmethod
    def _check_service(cls, one_service):
        if one_service not in cls.valid_services:
            logger.exception("Not valid service sent")
            raise NotValidWeatherFormException("Not valid service sent")
And I have a local API that is up in my pc.
Then I wrote this test:
def test_integration_average_temp_services_error(self):
    self.assertRaises
    (
        NotValidWeatherFormException,
        AverageWeatherService()._check_service,
        "MyFakeService",
    )
Although the test is successful with assertRaises used this way, it is not adding coverage. But if I call this method in a wrong way, like this one:
def test_integration_average_temp_services_error2(self):
    self.assertRaises
    (
        NotValidWeatherFormException,
        AverageWeatherService()._check_service("MyFakeService")
    )
Then of course I get an error running the test, because the exception is raised and not properly caught by assertRaises, BUT it adds coverage. If I run this test wrongly, my code is 100% covered. If I use assertRaises the first way, these two lines are not being covered (according to the coverage HTML report):
logger.exception("Not valid service sent")
raise NotValidWeatherFormException("Not valid service sent")
Also, if I execute the method the first way, the logger exception is not shown in the console, whereas when I run the tests the second way I am able to see the logger.exception output in the terminal.
Any ideas of what is going on?
Thanks in advance.
I could solve it.
This is the workaround:
def test_integration_average_temp_services_error(self):
    with self.assertRaises(NotValidWeatherFormException):
        AverageWeatherService()._check_service("MyFakeService")
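For anyone else who hits this, here is a minimal sketch (with stand-in names, not the original Django classes) of why the first form never executes the method: `self.assertRaises` on its own line is a bare attribute lookup, and the parenthesized lines after it are evaluated as a tuple literal, so the bound method is stored in the tuple but never called, which is why coverage never reaches the lines inside it.

```python
calls = []

class NotValid(Exception):
    pass

class Service:
    def check(self, name):
        calls.append(name)  # side effect so we can see whether it ran
        raise NotValid(name)

svc = Service()

# Mirrors the broken test: the next statement is a tuple literal, not a
# call to assertRaises.  svc.check is merely referenced, never invoked.
broken = (
    NotValid,
    svc.check,
    "MyFakeService",
)

print(calls)  # [] -- check() never ran, so its lines show as uncovered
```

The context-manager form in the workaround avoids the problem entirely, because the call happens inside the with block.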
Related
I have a pytest function as such:
def test_zork1_serial_number_error(zork1_unicode_error_serial):
    "handles a serial code with a unicode error"
    with pytest.raises(UnicodeDecodeError) as execinfo:
        serial_code = zork1_unicode_error_serial.serial_code
        assert serial_code == "XXXXXX"
The code that this hits is:
@property
def serial_code(self) -> str:
    code_bytes = bytes(self.view[0x12:0x18])
    try:
        if code_bytes.count(b"\x00"):
            print("111111111111")
            return "XXXXXX"
        return code_bytes.decode("ascii")
    except UnicodeDecodeError:
        print("222222222222")
        return "XXXXXX"
The print statements were just there for me to validate that the appropriate path was being hit. When I run the test I get this:
zork1_unicode_error_serial = <zmachine.header.Header object at 0x10e320d60>

    def test_zork1_serial_number_error(zork1_unicode_error_serial):
        "handles a serial code with a unicode error"
        with pytest.raises(UnicodeDecodeError) as execinfo:
            serial_code = zork1_unicode_error_serial.serial_code
>           assert serial_code == "XXXXXX"
E           Failed: DID NOT RAISE <class 'UnicodeDecodeError'>

tests/header_test.py:42: Failed
------------------------------------------------------------------------------ Captured stdout setup ------------------------------------------------------------------------------
/Users/jnyman/AppDev/quendor/tests/../zcode/zork1-r15-sXXXXXX.z2
------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------
222222222222
Notice how the "222222222222" is captured in the standard output, so the appropriate path is being hit and thus the exception is also clearly being generated. Yet pytest is saying that this exception was not raised. (I have also tested this code manually to make sure the exception is being generated.)
I've also tried the path of instead "marking" the test, like this:
@pytest.mark.xfail(raises=UnicodeDecodeError)
def test_zork1_serial_number_error(zork1_unicode_error_serial):
    ...
And that passes. However, it also passes regardless of what exception I put in there. For example, if I do @pytest.mark.xfail(raises=IndexError), that also passes even though an IndexError is never raised.
I can't tell if this has something to do with the fact that what I'm testing is a property. Again, as can be seen from the captured standard output, the appropriate code path is being executed and the exception is most definitely being raised. But perhaps the fact that my function is a property is causing an issue?
I have read this Python - test a property throws exception, but that isn't using pytest and it's unclear to me how to retrofit the thinking there. I'm also aware that perhaps throwing an exception in a property is not a good thing (referencing this: By design should a property getter ever throw an exception in python?), so maybe this test problem is pointing to a code smell. But I don't see an immediate way to make this better without adding extra complication. And that still wouldn't explain why pytest is not seeing the exception when it clearly is being generated.
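A guess at what is going on, sketched here with a stand-in Header class since the real zmachine code isn't shown in full: the property's own try/except consumes the UnicodeDecodeError and returns the sentinel, so nothing ever propagates out for pytest.raises to catch, and DID NOT RAISE is technically correct. The assertion that matches the observable behaviour is on the fallback value:

```python
class Header:
    """Stand-in for zmachine.header.Header; only serial_code is modeled."""

    def __init__(self, raw: bytes):
        self.view = raw

    @property
    def serial_code(self) -> str:
        code_bytes = bytes(self.view[0x12:0x18])
        try:
            if code_bytes.count(b"\x00"):
                return "XXXXXX"
            return code_bytes.decode("ascii")
        except UnicodeDecodeError:
            # The exception is raised *and handled* right here, so the
            # caller (and pytest.raises) never sees it.
            return "XXXXXX"

# Bytes 0x12..0x17 are invalid ASCII, so decode("ascii") raises internally.
header = Header(bytes(0x12) + b"\xff\xfe\xfd\xfc\xfb\xfa")

# Assert on the fallback value instead of on a swallowed exception:
assert header.serial_code == "XXXXXX"
```

Under that reading, the xfail variant presumably "passes" regardless of the exception class named, because the test body raises nothing at all and an unexpectedly passing xfail test is not reported as a failure by default.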
I'm trying to do a very simple test where my function takes in one parameter, a path to a database, tries to connect to the database using pyodbc.connect(database path), and raises an error if the path is invalid. However, as my test is currently structured, I am unable to make the test pass (raise the error) when I pass the function being tested a bad path.
I've tried to pass the test using mocks and without mocks. I believe the correct way to go would be to use mocks, however I also could not get that to work (see code below). I've found questions that were similar to what issue I'm having, but I could not use the suggestions in those questions to get my test to work.
Function Being Tested
import pyodbc

# When the function runs by itself (no test), pyodbc.InterfaceError is raised if the database path is bad!
def connect_to_database(database):
    try:
        conn = pyodbc.connect(database)
        return conn
    except pyodbc.InterfaceError as err:
        raise err
Tests
Without mocking
def test_invalid_path_to_database(self):
    self.assertRaises(pyodbc.InterfaceError, connect_to_database, '')
With mocking
def setUp(self):
    self.mock_conn = mock.patch('my_package.my_module.pyodbc.connect').start()
    self.addCleanup(mock.patch.stopall)

def test_invalid_path_to_database(self):
    self.mock_conn.side_effect = pyodbc.InterfaceError
    self.assertRaises(pyodbc.InterfaceError, connect_to_database, '')
I expect that when the database path passed to the function is not valid, an error (in this case InterfaceError) should be raised and the test should pass. This is not happening as the test is currently structured.
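Two things worth checking, hedged since the full module isn't shown: the patch target string must name pyodbc as it is imported in the module under test, and pyodbc.connect('') may raise a different exception than InterfaceError outside the test. Below is a self-contained sketch of the mocked variant that runs without pyodbc installed; the pyodbc stand-in and connect_to_database here are illustrative, not the real library.

```python
import types
from unittest import TestCase, mock

# Stand-in for the pyodbc module (hypothetical; the real library behaves
# the same for patching purposes).
pyodbc = types.SimpleNamespace()

class InterfaceError(Exception):
    pass

pyodbc.InterfaceError = InterfaceError
pyodbc.connect = lambda database: "real connection"

def connect_to_database(database):
    # Module under test: catching and immediately re-raising adds nothing,
    # so the call is made directly.
    return pyodbc.connect(database)

class ConnectTest(TestCase):
    def setUp(self):
        # Patch the attribute on the object the code under test actually
        # uses -- equivalent to 'my_package.my_module.pyodbc.connect'.
        patcher = mock.patch.object(pyodbc, "connect")
        self.mock_conn = patcher.start()
        self.addCleanup(patcher.stop)

    def test_invalid_path_to_database(self):
        self.mock_conn.side_effect = pyodbc.InterfaceError
        self.assertRaises(pyodbc.InterfaceError, connect_to_database, "")
```

With the side_effect set, the patched connect raises as soon as it is called, so assertRaises sees the exception and the test passes.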
I'm learning how to write unit tests using the unittest module and have jumped into the deep end with metaprogramming (I believe it's also known as monkey patching), but I have a stack trace printing out during a failed assertion test.
<output cut for brevity>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 203, in __exit__
    self._raiseFailure("{} not raised".format(exc_name))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 135, in _raiseFailure
    raise self.test_case.failureException(msg)
AssertionError: SystemExit not raised
It seems like I should be getting this error, as the test should fail, but I would prefer it be a bit more presentable and exclude the whole stack trace, leaving just the assertion error.
Here's the code that uses a context manager to check for SystemExit:
with self.assertRaises(SystemExit) as cm:
    o_hue_user.getHueLoginAuthentication()
self.assertNotEqual(cm.exception.code, 0)
The getHueLoginAuthentication method does execute exit(1) upon realizing the user name or password is incorrect, but I need to eliminate the stack trace being printed out.
BTW, I've searched this and other sites and cannot find an answer that seems to have a simple or complete solution.
Thanks!
Since I'm new to this forum, I'm not sure if this is the correct way to respond to the answer...
I can try to put in the key areas of code but I can't reveal too much code as I'm working for a financial institute and they have strict rules about sharing internal work.
To answer your question, this is the code that executes the exit():
authentication_fail_check = bool(re.search('Invalid username or password', r.text))
if r.status_code != 200 or authentication_fail_check:
    self.o_logging_utility.logger.error("Hue Login failed...")
    exit(1)
I use PyCharm to debug. The key point of this code is to return an unsuccessful execution so that I can stop execution if this error occurs. I don't think it's necessary to use a try block here but I would not think it would matter. What's your professional opinion?
When the assertion condition is met, I get a pass and no stack trace. All this appears to be telling me is that the assertion was not met. My assertion tests are working. All I want to do is get rid of the stack trace and just print the last line: "AssertionError: SystemExit not raised".
How do I get rid of the stack trace and leave the last line of the output as feedback?
Thanks!
I meant to say thank you for the warm welcome as well Don.
BTW, I can post the test code as it is not part of the main code base. This is my first metaprogramming (monkey patching) unit test and actually only my second unit test. I'm still struggling with the point of building code that tells me I get a result I already know I will get. For example, for a function that returns a false boolean: if I write code that executes it with parameters I know for a fact will return false, why build code that tells me it will return false?
I'm struggling with how to design good tests that won't tell me the blatantly obvious.
So far, all I have managed to do is use a unit test to tell me if, when I instantiate an object and execute a function, it tells me if the login was successful or not. I can change the inputs to cause it to fail. But I already know it will fail. If I understand unit tests correctly, when I test if a login is successful or not, it is more of an integration test rather than a unit test.
However, the problem is, this particular class that I'm testing gets its parameters from a configuration file and sets instance variables for the specific connection. In the test code, I have 2 sets of tests that represents a good login and a bad login. I know unit tests are more autonomous in that the function can be called with parameters and tested independently. However, this is not how this code works. So, I'm at a loss as to how to design an efficient and useful test.
This is the test code for the specific class:
import unittest
from HueUser import *

test_data = \
    {
        "bad_login": {"hue_protocol": "https",
                      "hue_server": "my.server.com",
                      "hue_service_port": "1111",
                      "hue_auth_url": "/accounts/login/?next=/",
                      "hue_user": "baduser",
                      "hue_pw": "badpassword"},
        "good_login": {"hue_protocol": "https",
                       "hue_server": "my.server.com",
                       "hue_service_port": "1111",
                       "hue_auth_url": "/accounts/login/?next=/",
                       "hue_user": "mouser",
                       "hue_pw": "good password"}
    }

def hue_test_template(*args):
    def foo(self):
        self.assert_hue_test(*args)
    return foo

class TestHueUserAuthentication(unittest.TestCase):
    def assert_hue_test(self, o_hue_user):
        with self.assertRaises(SystemExit) as cm:
            o_hue_user.getHueLoginAuthentication()
        self.assertNotEqual(cm.exception.code, 0)

for behaviour, test_cases in test_data.items():
    o_hue_user = HueUser()
    for name in test_cases:
        setattr(o_hue_user, name, test_cases[name])
    test_name = "test_getHueLoginAuthentication_{0}".format(behaviour)
    test_case = hue_test_template(o_hue_user)
    setattr(TestHueUserAuthentication, test_name, test_case)
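A stripped-down, runnable version of the pattern above, with a hypothetical FakeHueUser standing in for the internal HueUser class: the module-level loop attaches one generated test method per data set to the TestCase.

```python
import unittest

class FakeHueUser:
    """Stand-in for HueUser: exits with code 1 on a bad login."""
    def getHueLoginAuthentication(self):
        if self.hue_user == "baduser":
            raise SystemExit(1)

test_data = {
    "bad_login": {"hue_user": "baduser"},
}

def hue_test_template(o_hue_user):
    def test(self):
        with self.assertRaises(SystemExit) as cm:
            o_hue_user.getHueLoginAuthentication()
        self.assertNotEqual(cm.exception.code, 0)
    return test

class TestHueUserAuthentication(unittest.TestCase):
    pass

# Attach one generated test method per entry in test_data.
for behaviour, attrs in test_data.items():
    user = FakeHueUser()
    for name, value in attrs.items():
        setattr(user, name, value)
    setattr(TestHueUserAuthentication,
            "test_getHueLoginAuthentication_{0}".format(behaviour),
            hue_test_template(user))
```

Note that a good_login data set would need a different template: a successful login presumably does not call exit(), so assertRaises(SystemExit) would fail for it.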
Let me know how to respond to Answers or if I should just edit my post???
Thanks!
Welcome to Stack Overflow, Robert. It would be really helpful if you included a full example so other people can help you find the problem.
With the information you've given, I would guess that getHueLoginAuthentication() isn't actually raising the error you think it is. Try using a debugger to follow what it's doing, or put a print statement in just before it calls exit().
Here's a full example that shows how assertRaises() works:
from unittest import TestCase

def foo():
    exit(1)

def bar():
    pass

class FooTest(TestCase):
    def test_foo(self):
        with self.assertRaises(SystemExit):
            foo()

    def test_bar(self):
        with self.assertRaises(SystemExit):
            bar()
Here's what happens when I run it:
$ python3.6 -m unittest scratch.py
F.
======================================================================
FAIL: test_bar (scratch.FooTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scratch.py", line 19, in test_bar
bar()
AssertionError: SystemExit not raised
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
I'm starting to use Behave to implement some tests. I would like to replace some of my existing unittests (which are more feature tests). Some of these use assertRaises to check that certain calls to the back-end service raise the errors they should. Is it possible to have something similar in Behave (or maybe rather Gherkin)?
The following unittest calls my backend service and as a guest has logged on, is not able to perform the admin task (do_admin_task). It should raise an exception.
def test_mycall(self):
    service = myservice('guest', 'pwd')
    self.assertRaises(NoPermission, service.do_admin_task, some_param)
In my feature file, how would I create my scenario? Like this?
Scenario: test guest can't do an admin task
    Given I log on to my service as guest / pwd
    When I try to perform my admin task
    Then it should fail saying NoPermission
I believe that this will already raise an exception in the when step, so won't even get to the then step.
One potential way I could imagine around this is to create a specific step that performs both of these steps and does the exception handling. If I however want to mock errors in lower level calls, then I would have to re-write many of these steps, which is exactly what I'm hoping to avoid by switching to Behave in the first place.
How should I approach this?
When thinking on the Gherkin level, the exception is an expected outcome of the when step. So the step definition should have a try block and store the result/exception in the context; the then step can then check that result/exception.
@when(u'I try to perform my admin task')
def step_impl(context):
    try:
        service = myservice(context.user, context.password)
        context.admintaskresult = service.do_admin_task(context.some_param)
        context.admintaskexception = None
    except Exception as ex:
        context.admintaskresult = None
        context.admintaskexception = ex

@then(u'it should fail saying NoPermission')
def step_impl(context):
    assert isinstance(context.admintaskexception, NoPermission)
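The same capture-then-assert flow, runnable outside Behave with hypothetical stand-ins (MyService, NoPermission, and a bare Context object take the place of the real service and Behave's context):

```python
class NoPermission(Exception):
    pass

class MyService:
    """Hypothetical back-end service: only admins may do admin tasks."""
    def __init__(self, user, password):
        self.user = user

    def do_admin_task(self, param):
        if self.user != "admin":
            raise NoPermission("admin task requires admin rights")
        return "done"

class Context:
    pass

context = Context()

# "when" step: attempt the task and capture the outcome instead of
# letting the exception escape the step.
try:
    context.admintaskresult = MyService("guest", "pwd").do_admin_task(None)
    context.admintaskexception = None
except Exception as ex:
    context.admintaskresult = None
    context.admintaskexception = ex

# "then" step: inspect what the "when" step stored.
assert isinstance(context.admintaskexception, NoPermission)
```

Because the when step never re-raises, the scenario always reaches the then step, which is where the pass/fail decision belongs.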
I have a library management_utils.py that's something like:
path = global_settings.get_rdio_base_path()
if path == "":
    raise PathRequiredError("Path is required...")

def some_keyword():
    # keyword requires path to be set to some valid value
    ...
In my test case file I have something like:
*** Settings ***
Library    management_utils

*** Test Cases ***
Smoke Test
    some keyword
    ...
Is it possible to abort running these test cases if the management_utils setup fails? Basically I'd like to abort execution of these test cases if PathRequiredError was raised in management_utils.py.
When I run the tests, I see the error being raised but execution continues on.
I saw in the Robot documentation that you can set ROBOT_EXIT_ON_FAILURE = True in your error class, but this doesn't seem to work for this case. Also, ideally I'd be able to do something more granular, so that it only aborts the test cases that require this library, not all test execution.
Thank you!
The problem is that the exception is raised during library loading, since it is at the top level of the module. ROBOT_EXIT_ON_FAILURE only has an effect if the failure comes from a keyword.
Instead, do this:
def get_path():
    path = global_settings.get_rdio_base_path()
    if path == "":
        raise PathRequiredError("Path is required...")
    return path

def some_keyword():
    path = get_path()
    ...
Now the exception is raised inside a keyword, and the test execution will be stopped.
As for the other point, there's no way to abort just some tests using ROBOT_EXIT_ON_FAILURE.
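For the more granular behaviour, one option (sketched with a hypothetical global_settings stand-in) is to drop ROBOT_EXIT_ON_FAILURE and let get_path() raise inside each keyword that needs the path, so only those keywords, and hence only their tests, fail while unrelated tests keep running:

```python
class PathRequiredError(Exception):
    pass

class _FakeSettings:
    """Hypothetical stand-in for global_settings; simulates a missing path."""
    def get_rdio_base_path(self):
        return ""

global_settings = _FakeSettings()
_path = None

def get_path():
    """Lazy, cached lookup: validates only when a keyword actually needs it."""
    global _path
    if _path is None:
        candidate = global_settings.get_rdio_base_path()
        if candidate == "":
            raise PathRequiredError("Path is required...")
        _path = candidate
    return _path

def some_keyword():
    path = get_path()  # fails just this keyword if the path is missing
    return path
```

Keywords that don't call get_path() are unaffected, which gives the per-test granularity that ROBOT_EXIT_ON_FAILURE cannot.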