Python 3.x unittest module: disable SystemExit stack trace output

I'm learning how to write unit tests using the unittest module and have jumped into the deep end with metaprogramming (I believe it's also known as monkey patching), but a stack trace gets printed out during a failed assertion test.
<output cut for brevity>
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 203, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 135, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: SystemExit not raised
It seems like I should be getting this error, since the test should fail, but I would prefer the output to be a bit more presentable and exclude the whole stack trace, leaving just the assertion error.
Here's the code that uses a context manager to check for SystemExit:
with self.assertRaises(SystemExit) as cm:
    o_hue_user.getHueLoginAuthentication()
self.assertNotEqual(cm.exception.code, 0)
The getHueLoginAuthentication method does execute exit(1) when it finds that the username or password is incorrect, but I need to eliminate the stack trace that gets printed out.
BTW, I've searched this and other sites and cannot find an answer that seems to have a simple or complete solution.
Thanks!
Since I'm new to this forum, I'm not sure if this is the correct way to respond to the answer...
I can try to put in the key areas of code, but I can't reveal too much, as I'm working for a financial institution and they have strict rules about sharing internal work.
To answer your question, this is the code that executes the exit():
authentication_fail_check = bool(re.search('Invalid username or password', r.text))
if (r.status_code != 200 or authentication_fail_check):
    self.o_logging_utility.logger.error("Hue Login failed...")
    exit(1)
I use PyCharm to debug. The key point of this code is to return an unsuccessful exit status so that I can stop execution if this error occurs. I don't think it's necessary to use a try block here, but I would not think it would matter. What's your professional opinion?
When the assertion condition is met, I get a pass and no stack trace. All this output appears to be telling me is that the assertion was not met; my assertion tests are working. All I want to do is get rid of the stack trace and just print the last line: "AssertionError: SystemExit not raised".
How do I get rid of the stack trace and leave the last line of the output as feedback?
Thanks!
I meant to say thank you for the warm welcome as well Don.
BTW, I can post the test code as it is not part of the main code base. This is my first metaprogramming (monkey patching) unit test and actually only my second unit test. I'm still struggling with the idea of going to the trouble of building code that tells me I get a particular result I already know I will get. Take a function that returns a false boolean, for example: if I write code that says "execute this function with these parameters" and I know for a fact that those values will return false, why build code that tells me it will return false?
I'm struggling with how to design good tests that won't tell me the blatantly obvious.
So far, all I have managed to do is use a unit test to tell me whether, when I instantiate an object and execute a function, the login was successful or not. I can change the inputs to cause it to fail, but I already know it will fail. If I understand unit tests correctly, testing whether a login is successful or not is more of an integration test than a unit test.
However, the problem is that this particular class gets its parameters from a configuration file and sets instance variables for the specific connection. In the test code, I have two sets of test data that represent a good login and a bad login. I know unit tests are more autonomous, in that the function can be called with parameters and tested independently, but this is not how this code works. So I'm at a loss as to how to design an efficient and useful test.
This is the test code for the specific class:
import unittest
from HueUser import *

test_data = \
    {
        "bad_login": {"hue_protocol": "https",
                      "hue_server": "my.server.com",
                      "hue_service_port": "1111",
                      "hue_auth_url": "/accounts/login/?next=/",
                      "hue_user": "baduser",
                      "hue_pw": "badpassword"},
        "good_login": {"hue_protocol": "https",
                       "hue_server": "my.server.com",
                       "hue_service_port": "1111",
                       "hue_auth_url": "/accounts/login/?next=/",
                       "hue_user": "mouser",
                       "hue_pw": "good password"}
    }

def hue_test_template(*args):
    def foo(self):
        self.assert_hue_test(*args)
    return foo

class TestHueUserAuthentication(unittest.TestCase):
    def assert_hue_test(self, o_hue_user):
        with self.assertRaises(SystemExit) as cm:
            o_hue_user.getHueLoginAuthentication()
        self.assertNotEqual(cm.exception.code, 0)

for behaviour, test_cases in test_data.items():
    o_hue_user = HueUser()
    for name in test_cases:
        setattr(o_hue_user, name, test_cases[name])
    test_name = "test_getHueLoginAuthentication_{0}".format(behaviour)
    test_case = hue_test_template(o_hue_user)
    setattr(TestHueUserAuthentication, test_name, test_case)
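Because the test_* methods are attached to TestHueUserAuthentication at import time, normal discovery (python -m unittest) will pick them up; if the file should also be runnable directly, the usual unittest entry point can be appended at the bottom, for example:

if __name__ == "__main__":
    # Standard unittest boilerplate, not part of the original file.
    unittest.main(verbosity=2)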
Let me know how to respond to answers, or whether I should just edit my post.
Thanks!

Welcome to Stack Overflow, Robert. It would be really helpful if you included a full example so other people can help you find the problem.
With the information you've given, I would guess that getHueLoginAuthentication() isn't actually raising the error you think it is. Try using a debugger to follow what it's doing, or put a print statement in just before it calls exit().
Here's a full example that shows how assertRaises() works:
from unittest import TestCase


def foo():
    exit(1)


def bar():
    pass


class FooTest(TestCase):
    def test_foo(self):
        with self.assertRaises(SystemExit):
            foo()

    def test_bar(self):
        with self.assertRaises(SystemExit):
            bar()
Here's what happens when I run it:
$ python3.6 -m unittest scratch.py
F.
======================================================================
FAIL: test_bar (scratch.FooTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "scratch.py", line 19, in test_bar
    bar()
AssertionError: SystemExit not raised
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
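If you also want to assert on the exit status, as your own snippet does, the context-manager form keeps the exception around for you; here is a small sketch extending foo() from the example above:

class FooExitCodeTest(TestCase):
    def test_foo_exit_code(self):
        # Capture the SystemExit so its exit status can be inspected;
        # exit(1) produces SystemExit with code == 1.
        with self.assertRaises(SystemExit) as cm:
            foo()
        self.assertNotEqual(cm.exception.code, 0)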

Related

To print or not to print in pytest

When talking about pytest we know two things:
When a test passes, no output is shown by default.
Sometimes assertion failures can have very cryptic messages.
I took a course that solved this by using print to clarify the desired outputs and invoking pytest as pytest -v -s. I think it is a great solution.
Another developer in my company thinks that test code should be as free of "side effects" as possible (and he considers prints a side effect). He suggests outputting to a file instead, which I don't think is good practice (I think that is an undesirable side effect too).
So I would like to hear about this from other developers.
How do you handle the two points given at the beginning, and do you use prints in your tests?
As someone has already pointed out, you can provide your own assert message:
def test_something():
    i = 2
    assert i == 1, "i should be equal to one"
There should really be no difference between using assert messages and prints, but with an assert message only that message would be visible in the pytest report, not all the stdout calls. In the following case, 0-9 would also be printed in the pytest report:
def test_something():
    i = 2
    for i in range(10):
        print(i)
    assert i == 1
Logging everything to a file would definitely make working with pytest harder, and it would be a pain to debug if your tests fail in CI.
If you need descriptive messages, I would prefer using assert messages and, maybe, prints for debug information.
Using print() in your tests is not a good solution; you need to see the data in the CLI or in a pipeline.
For assertions you can provide custom messages, for the pass or fail case or even when raising an exception.
Here is a basic tutorial for this:
https://docs.pytest.org/en/7.1.x/how-to/assert.html
For general test steps, the best way to get the information is logging, at different levels:
import logging as logger
logger.info('what info you want to share')
logger.error('what info you want to share')
logger.debug('what info you want to share')
For more info you can check this:
https://docs.python.org/3/howto/logging.html
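If you go the logging route, pytest can also surface those records live instead of only on failure. A minimal sketch of the relevant settings, using pytest's standard live-log options (shown here as an example, not something from the tutorial above):

[pytest]
log_cli = true
log_cli_level = INFO

The same thing can be done per run with pytest --log-cli-level=INFO.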

Catching Exception in a Property Function with Pytest

I have a pytest function as such:
def test_zork1_serial_number_error(zork1_unicode_error_serial):
    "handles a serial code with a unicode error"
    with pytest.raises(UnicodeDecodeError) as execinfo:
        serial_code = zork1_unicode_error_serial.serial_code
        assert serial_code == "XXXXXX"
The code that this hits is:
@property
def serial_code(self) -> str:
    code_bytes = bytes(self.view[0x12:0x18])
    try:
        if code_bytes.count(b"\x00"):
            print("111111111111")
            return "XXXXXX"
        return code_bytes.decode("ascii")
    except UnicodeDecodeError:
        print("222222222222")
        return "XXXXXX"
The print statements were just there for me to validate that the appropriate path was being hit. When I run the test I get this:
zork1_unicode_error_serial = <zmachine.header.Header object at 0x10e320d60>

    def test_zork1_serial_number_error(zork1_unicode_error_serial):
        "handles a serial code with a unicode error"
        with pytest.raises(UnicodeDecodeError) as execinfo:
            serial_code = zork1_unicode_error_serial.serial_code
>           assert serial_code == "XXXXXX"
E           Failed: DID NOT RAISE <class 'UnicodeDecodeError'>

tests/header_test.py:42: Failed
------------------------------------------------------------------------------ Captured stdout setup ------------------------------------------------------------------------------
/Users/jnyman/AppDev/quendor/tests/../zcode/zork1-r15-sXXXXXX.z2
------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------
222222222222
Notice how the "222222222222" is captured in the standard output, so the appropriate path is being hit and thus the exception is also clearly being generated. Yet pytest is saying that this exception was not raised. (I have also tested this code manually to make sure the exception is being generated.)
I've also tried "marking" the test instead, like this:
@pytest.mark.xfail(raises=UnicodeDecodeError)
def test_zork1_serial_number_error(zork1_unicode_error_serial):
    ...
And that passes. However, it also passes regardless of what exception I put in there. For example, if I do @pytest.mark.xfail(raises=IndexError), that also passes even though an IndexError is never raised.
I can't tell if this has something to do with the fact that what I'm testing is a property. Again, as can be seen from the captured standard output, the appropriate code path is being executed and the exception is most definitely being raised. But perhaps the fact that my function is a property is causing an issue?
I have read this Python - test a property throws exception but that isn't using pytest, and it's unclear to me how to retrofit the thinking there. I'm also aware that perhaps throwing an exception in a property is not a good thing (referencing this: By design should a property getter ever throw an exception in python?), so maybe this test problem is pointing to a code smell. But I don't see an immediate way to make this better without adding extra complication, and that still wouldn't explain why pytest is not seeing the exception when it clearly is being generated.
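For what it's worth, looking only at the property shown above: the UnicodeDecodeError is caught inside serial_code itself and never escapes the property, so pytest.raises has nothing to catch; the only behaviour observable from outside is the "XXXXXX" fallback. A sketch of a test written against that contract (same fixture as in the question) might look like this:

def test_zork1_serial_number_error(zork1_unicode_error_serial):
    "the property swallows the UnicodeDecodeError and returns the fallback"
    # No pytest.raises here: the exception is handled inside the property,
    # so only the fallback value is visible to the caller.
    assert zork1_unicode_error_serial.serial_code == "XXXXXX"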

Django test does not add coverage with AssertRaises

There are two lines that are not being executed by my Django tests when the method is called via self.assertRaises.
I am using: Python 3.6.9, Django 3, Coverage.
I have this class:
class AverageWeatherService:
    subclasses = WeatherService.__subclasses__()
    valid_services = {
        subclass.service_key: subclass for subclass in subclasses
    }

    @classmethod
    def _check_service(cls, one_service):
        if one_service not in cls.valid_services:
            logger.exception("Not valid service sent")
            raise NotValidWeatherFormException("Not valid service sent")
And I have a local API that is up on my PC.
Then I wrote this test:
def test_integration_average_temp_services_error(self):
    self.assertRaises
    (
        NotValidWeatherFormException,
        AverageWeatherService()._check_service,
        "MyFakeService",
    )
And although the test passes, with assertRaises apparently used properly, it does not add coverage. But if I call the method in the wrong way, like this:
def test_integration_average_temp_services_error2(self):
    self.assertRaises
    (
        NotValidWeatherFormException,
        AverageWeatherService()._check_service("MyFakeService")
    )
Then of course I get an error running the test, because the exception is raised and not properly caught by assertRaises, BUT it adds coverage. If I run the test this wrong way, my code is 100% covered. If I use assertRaises the first way, these two lines are not covered (according to the coverage HTML report):
logger.exception("Not valid service sent")
raise NotValidWeatherFormException("Not valid service sent")
Also, if I execute the method the first way, the logger exception is not shown in the console, whereas when I run the test the second way I can see the logger.exception output in the terminal.
Any ideas of what is going on?
Thanks in advance.
I could solve it.
This is the workaround:
def test_integration_average_temp_services_error(self):
    with self.assertRaises(NotValidWeatherFormException):
        AverageWeatherService()._check_service("MyFakeService")
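The likely cause, for anyone hitting the same thing: in the original tests the line break after self.assertRaises ends that statement, so the parenthesised lines below it are just a tuple that gets built and discarded. _check_service is never called through assertRaises, which is why the first test "passes" without covering those lines, while the second one calls the method directly inside the tuple and errors out. Besides the context-manager workaround above, the callable form also works as long as it stays a single call expression; a sketch using the same names as in the question:

def test_integration_average_temp_services_error(self):
    # Keep the opening parenthesis on the same line, so this is one call
    # rather than a bare attribute access followed by a discarded tuple.
    self.assertRaises(
        NotValidWeatherFormException,
        AverageWeatherService()._check_service,
        "MyFakeService",
    )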

Is there a way for pytest to check if a log entry was made at Error level or higher?

Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means that a test that causes such an exception won't outright fail, so long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug however.
Most of the Googling I have done shows results on how to display logs within pytest, which I am doing, but I can't find out if there is a way to expose the logs within the test, such that I can fail any test with a log at Error or Critical level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest


@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # Run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors


def test_main():
    subject.main()
    # assert False
subject.py:
import logging

logger = logging.Logger('s')


def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert statement fails the test without errors, and prints [] to stdout, and log critical to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However this still doesn't make a difference, and print(caplog.get_records("call")) still returns []
You can build something like this using the caplog fixture
here's some sample code from the docs which does some assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
since the records are the standard logging record types, you can use whatever you need there
here's one way you might do this ~more automatically using an autouse fixture:
import logging
import pytest


@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
You can use the unittest.mock module (even if using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have some assert that fails if, say, logging.error was called.
That'd be a short term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.
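A minimal sketch of that monkey-patching idea, assuming the application exposes a module-level logger and an entry point (myapp, myapp.logger and run_input_loop below are placeholder names, not anything from the question):

from unittest import mock

import myapp  # placeholder for the module under test


def test_run_logs_no_errors():
    # Patch the logger the application actually uses; any .error() call
    # made during the test is recorded on the mock instead.
    with mock.patch.object(myapp.logger, "error") as mocked_error:
        myapp.run_input_loop()  # placeholder entry point
    mocked_error.assert_not_called()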

How to decide where Python debugger stops and which line is to be blamed?

Background:
I write Squish GUI tests in Python. I tried to make the test code as Pythonic and DRY as I could, and hence I moved all repeating code into separate classes / modules.
Problem definition:
A test.verify or assert statement makes the debugger stop at the very line where the statement is, and in most cases that is in the module with the details of a single test step. This line is shown in Eclipse during a manual run and output by the automated test in Jenkins.
To actually see what failed in a test, it would be far better to stop the debugger at the invocation point of the procedures that contain the asserts. Then the tester / GUI developer can spot which actions on the GUI led to a problem and what was checked.
Example:
test_abstract.py
class App():
    def open_file(self, filename):
        pass  # example

    def test_file_content(self, content):
        # squish magic to get file content from textbox etc.
        # ...
        test.verify(content in textBoxText)
test_file_opening.py
def main():
    app = App()
    app.open_file('filename.txt')
    app.test_file_content('lorem')
As the test fails on the test.verify() invocation, the debugger stops and points at the test_abstract.py file. It actually says nothing about the test steps that led to this test failure.
Is there a way to tell the debugger to ignore the direct place of the test failure and make it show where the procedure containing the test was invoked? I'm looking for an elegant way that would not need too much code in the generic test file itself.
A not-ideal solution which works:
For now I'm not using test.verify inside the abstract modules; I invoke it in the particular test case code. Generalized test functions return a tuple (test_result, test_descriptive_message_with_error) which is unpacked with *:
def test_file_content(content):
    # test code
    return (result, 'Test failed because...')
and test case code contains:
test.verify(*test_file_content('lorem'))
which works fine, but then each and every test case has to contain a lot of test.verify(*..., and test developers have to remember about it. Not to mention that it looks WET (not DRY).
Yes! If you have access to Squish 6, there is some new functionality to do exactly that. The fixateResultContext() function will cause all results to be rewritten such that they appear to originate at an ancestor frame. See the documentation.
If you are using Python, this can be wrapped into a handy context manager:
def resultsReportedAtCallsite(ancestorLevel=1):
    class Ctx:
        def __enter__(self):
            test.fixateResultContext(ancestorLevel + 1)

        def __exit__(self, exc_type, exc_value, traceback):
            test.restoreResultContext()
    return Ctx()


def libraryFunction():
    with resultsReportedAtCallsite():
        test.compare("Apples", "Oranges")
Any later call to libraryFunction() that fails will point at the line of code containing libraryFunction(), and not the test.compare() within.
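The same wrapper can also be expressed with contextlib from the standard library; this is just an equivalent sketch, assuming (like the snippet above) that Squish's test module is in scope, and the frame offset may need adjusting because @contextmanager introduces an extra frame:

from contextlib import contextmanager


@contextmanager
def resultsReportedAtCallsite(ancestorLevel=1):
    # Rewrite result locations to an ancestor frame for the duration of
    # the with-block, then restore the default behaviour afterwards.
    test.fixateResultContext(ancestorLevel + 1)
    try:
        yield
    finally:
        test.restoreResultContext()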
