custom sys.excepthook doesn't work with pytest - python

I wanted to put the results of pytest asserts into a log.
First I tried this solution:
def logged_assert(self, testval, msg=None):
    if not testval:
        if msg is None:
            try:
                assert testval
            except AssertionError as e:
                self.logger.exception(e)
                raise e
        self.logger.error(msg)
        assert testval, msg
It works fine, but I have to supply my own msg for every assert instead of the built-in message. The problem is that testval is already evaluated by the time it is passed into the function, so the error message is just
AssertionError: False
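For illustration (hypothetical calls, not from my actual test code): the expression is evaluated before logged_assert() ever runs, so the traceback can only show the resulting boolean, never the expression itself.
x = 3
self.logged_assert(x == 5)                   # -> AssertionError: False (the expression is lost)
self.logged_assert(x == 5, "x must be 5")    # works, but msg has to be written by hand every time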
I found what looks like an excellent way to solve the problem in the first comment of http://code.activestate.com/recipes/577074-logging-asserts/ ,
and I wrote this function in my logger wrapper module:
def logged_excepthook(er_type, value, trace):
    print('HOOK!')
    if isinstance(er_type, AssertionError):
        current = sys.modules[sys._getframe(1).f_globals['__name__']]
        if 'logger' in sys.modules[current]:
            sys.__excepthook__(er_type, value, trace)
            sys.modules[current].error(exc_info=(er_type, value, trace))
        else:
            sys.__excepthook__(er_type, value, trace)
    else:
        sys.__excepthook__(er_type, value, trace)
and then
sys.excepthook = logged_excepthook
In the test module where I have the asserts, the output of
import sys
print(sys.excepthook, sys.__excepthook__, logged_excepthook)
is
<function logged_excepthook at 0x02D672B8> <built-in function excepthook> <function logged_excepthook at 0x02D672B8>
But there is no 'HOOK!' message in my output, and no ERROR message in my log files either. Everything behaves as if the built-in sys.excepthook were still in place.
I looked through the pytest sources, but sys.excepthook is not changed there.
However, if I interrupt my code execution with Ctrl-C, I do get the 'HOOK!' message in stdout.
The main question is why the built-in sys.excepthook is called instead of my custom function, and how I can fix that.
I would also be interested to know whether another way to log assertion errors exists.
I am using Python 3.2 (32-bit) on 64-bit Windows 8.1.

excepthook is only triggered if there is an unhandled exception, i.e. the one that normally terminates your program. Any exceptions in a test are handled by the test framework.
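A quick illustration of that point outside pytest (a minimal standalone sketch, unrelated to the question's code): the hook only fires for an exception that nothing catches.
import sys

def hook(er_type, value, trace):
    print('HOOK!')
    sys.__excepthook__(er_type, value, trace)

sys.excepthook = hook

try:
    assert False
except AssertionError:
    pass          # handled here, so the hook is never invoked

assert False      # unhandled: the interpreter calls sys.excepthook before exiting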
See "Asserting with the assert statement" in the pytest documentation for how the feature is intended to be used. A custom message is specified the standard way: assert condition, failure_message.
If you're not satisfied with the way pytest handles asserts, you need to either
use a wrapper, or
hook the assert statement.
pytest uses an assert hook as well. Its logic lives in Lib\site-packages\_pytest\assertion (a stock plugin). It's probably enough to wrap/replace a few functions in there. To avoid patching the code base, you may be able to get by with your own plugin: either patch the assertion plugin at runtime, or
disable it and reuse its functionality yourself instead.
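If all you need is the failed assertions in your log, a less invasive route is a small conftest.py plugin that logs every failing test report. This is only a sketch of that idea, not pytest's own mechanism; the logger name is arbitrary.
# conftest.py -- a rough sketch
import logging
import pytest

logger = logging.getLogger("test_failures")

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield                  # let pytest build the report first
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        # longreprtext contains the rewritten assertion explanation
        logger.error("FAILED %s\n%s", item.nodeid, report.longreprtext)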

Related

Catching Exception in a Property Function with Pytest

I have a pytest function as such:
def test_zork1_serial_number_error(zork1_unicode_error_serial):
    "handles a serial code with a unicode error"
    with pytest.raises(UnicodeDecodeError) as execinfo:
        serial_code = zork1_unicode_error_serial.serial_code
        assert serial_code == "XXXXXX"
The code that this hits is:
@property
def serial_code(self) -> str:
    code_bytes = bytes(self.view[0x12:0x18])
    try:
        if code_bytes.count(b"\x00"):
            print("111111111111")
            return "XXXXXX"
        return code_bytes.decode("ascii")
    except UnicodeDecodeError:
        print("222222222222")
        return "XXXXXX"
The print statements were just there for me to validate that the appropriate path was being hit. When I run the test I get this:
zork1_unicode_error_serial = <zmachine.header.Header object at 0x10e320d60>

    def test_zork1_serial_number_error(zork1_unicode_error_serial):
        "handles a serial code with a unicode error"
        with pytest.raises(UnicodeDecodeError) as execinfo:
            serial_code = zork1_unicode_error_serial.serial_code
>           assert serial_code == "XXXXXX"
E           Failed: DID NOT RAISE <class 'UnicodeDecodeError'>

tests/header_test.py:42: Failed
---------------------------- Captured stdout setup ----------------------------
/Users/jnyman/AppDev/quendor/tests/../zcode/zork1-r15-sXXXXXX.z2
----------------------------- Captured stdout call -----------------------------
222222222222
Notice how "222222222222" is captured in the standard output, so the appropriate path is being hit and the exception is clearly being generated. Yet pytest says the exception was not raised. (I have also tested this code manually to make sure the exception is being generated.)
I've also tried marking the test instead, like this:
@pytest.mark.xfail(raises=UnicodeDecodeError)
def test_zork1_serial_number_error(zork1_unicode_error_serial):
    ...
And that passes. However, it also passes regardless of what exception I put in there. For example, if I do @pytest.mark.xfail(raises=IndexError) that also passes even though an IndexError is never raised.
I can't tell if this has something to do with the fact that what I'm testing is a property. Again, as can be seen from the captured standard output, the appropriate code path is being executed and the exception is most definitely being raised. But perhaps the fact that my function is a property is causing an issue?
I have read Python - test a property throws exception, but that isn't using pytest and it's unclear to me how to retrofit the thinking there. I'm also aware that throwing an exception in a property is perhaps not a good thing (referencing By design should a property getter ever throw an exception in python?), so maybe this test problem is pointing to a code smell. But I don't see an immediate way to make this better without adding extra complication, and that still wouldn't explain why pytest is not seeing the exception when it clearly is being generated.

Is there a way for pytest to check if a log entry was made at Error level or higher?

Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means a test that causes such an exception won't outright fail as long as the output is still what is expected; this might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug, however.
Most of the Googling I have done turns up results on how to display logs within pytest, which I am already doing, but I can't find out whether there is a way to expose the logs within the test itself, so that I can fail any test with a log at Error or Critical level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest

@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # Run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors

def test_main():
    subject.main()
    # assert False
subject.py:
import logging

logger = logging.Logger('s')

def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert False statement makes the test fail, but the fixture reports no errors; it prints [] to stdout, and log critical goes to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However, this still doesn't make a difference, and print(caplog.get_records("call")) still shows [].
You can build something like this using the caplog fixture. Here's some sample code from the docs which does some assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
Since the records are the standard logging record types, you can use whatever you need there.
Here's one way you might do this more automatically, using an autouse fixture:
@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
You can use the unittest.mock module (even if using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have some assert that fails if, say, logging.error was called.
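A minimal sketch of that idea, reusing the subject module from the question (the test name is made up):
from unittest import mock

import subject

def test_main_does_not_log_critical():
    # Patch the logger method the module under test uses; if main() logs
    # a critical message, the assertion below fails the test.
    with mock.patch.object(subject.logger, "critical") as mock_critical:
        subject.main()
    mock_critical.assert_not_called()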
That'd be a short term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.

Is it possible to raise a Python exception inside a ctypes callback that is called from C?

I have a shared library that I've wrapped using ctypes. The library exposes function pointers that can be used to modify its error-handling behaviour. Rather than simply printing a warning or terminating the process with exit(1), I'd like to raise a Python exception which could be caught and handled on the Python side.
Here's a sketch of what I'm trying to do:
import ctypes

mylib = ctypes.cdll.LoadLibrary('mylib.so')
error_handler_p = ctypes.c_void_p.in_dll(mylib, 'error_handler')

@ctypes.CFUNCTYPE(None, ctypes.c_char_p)
def custom_error_handler(message):
    raise RuntimeError(message)

error_handler_p.value = ctypes.cast(custom_error_handler, ctypes.c_void_p).value

try:
    mylib.do_something_bad()
except RuntimeError:
    # maybe handle the exception here
    pass
At the moment it seems as though the exception is being raised within the callback, since I see a traceback with the expected error message on stderr. However, this exception does not seem to propagate up to the calling Python code: it never gets caught, and the calling process terminates normally.
You'd have to use the ctypes.PyDLL() class (via the ctypes.pydll loader) to access your library, and your C code would have to use the Python C API.
You 'raise' an exception in C code by calling one of the PyErr_* functions, and then returning -1 to flag an error from the function. The PyDLL() class will then check for an exception being set.
You can't use any of the other loaders. Note that the PyDLL() loader also doesn't release the GIL; that would be the responsibility of your extension instead (use the macros supplied by the Python API headers).
Note that since you already have to use the Python API just to raise exceptions, you may as well expose your C code as a proper Python extension.
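If changing the C side isn't an option, one pure-Python workaround (not what the answer above describes) is to record the error inside the callback and re-raise it after the foreign call returns. A rough sketch, reusing the hypothetical mylib names from the question:
import ctypes

_pending_errors = []

@ctypes.CFUNCTYPE(None, ctypes.c_char_p)
def recording_error_handler(message):
    # An exception raised here would only be printed by ctypes and swallowed,
    # so stash it instead of raising.
    _pending_errors.append(RuntimeError(message))

def checked_call(func, *args):
    result = func(*args)
    if _pending_errors:
        raise _pending_errors.pop()
    return result

# usage: checked_call(mylib.do_something_bad)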

Is there a way to abort a test if the settings library setup fails?

I have a library management_utils.py that's something like:
path = global_settings.get_rdio_base_path()
if path == "":
    raise PathRequiredError("Path is required...")

def some_keyword():
    # keyword requires path to be set to some valid value
    ...
In my test case file I have something like:
*** Settings ***
Library    management_utils

*** Test Cases ***
Smoke Test
    some keyword
    ...
Is it possible to abort running these test cases if the management_utils setup fails? Basically I'd like to abort execution of these test cases if PathRequiredError was raised in management_utils.py.
When I run the tests, I see the error being raised, but execution continues.
I saw in the Robot Framework documentation that you can set ROBOT_EXIT_ON_FAILURE = True in your error class, but that doesn't seem to work in this case. Ideally I'd also be able to do something more granular, so that it only aborts the test cases that require this library, not all test execution.
Thank you!
The problem is that the exception is raised during library loading, since the check is at the top level of the module. ROBOT_EXIT_ON_FAILURE only has an effect if the failure comes from a keyword.
Instead, do this:
def get_path():
    path = global_settings.get_rdio_base_path()
    if path == "":
        raise PathRequiredError("Path is required...")
    return path

def some_keyword():
    path = get_path()
    ...
Now the exception is raised inside a keyword, and the test execution will be stopped.
As for the other point, there's no way to abort just some tests using ROBOT_EXIT_ON_FAILURE.
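For completeness, the exit-on-failure behaviour mentioned in the question is enabled by an attribute on the exception class itself, roughly like the sketch below; as noted above, it stops the whole run rather than just the tests that use this library.
class PathRequiredError(RuntimeError):
    # When a keyword raises this exception, Robot Framework aborts the
    # remaining test execution.
    ROBOT_EXIT_ON_FAILURE = True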

Ignore exceptions thrown and caught inside a library

The Python standard library and other libraries I use (e.g. PyQt) sometimes use exceptions for non-error conditions. Look at the following excerpt from the function os.get_exec_path(). It uses multiple try statements to catch exceptions that are thrown while trying to find some environment data.
try:
    path_list = env.get('PATH')
except TypeError:
    path_list = None

if supports_bytes_environ:
    try:
        path_listb = env[b'PATH']
    except (KeyError, TypeError):
        pass
    else:
        if path_list is not None:
            raise ValueError(
                "env cannot contain 'PATH' and b'PATH' keys")
        path_list = path_listb

    if path_list is not None and isinstance(path_list, bytes):
        path_list = fsdecode(path_list)
These exceptions do not signify an error and are thrown under normal conditions. When using exception breakpoints for one of these exceptions, the debugger will also break in these library functions.
Is there a way in PyCharm or in Python in general to have the debugger not break on exceptions that are thrown and caught inside a library without any involvement of my code?
In PyCharm, go to Run --> View Breakpoints and check "On raise" and "Ignore library files".
The first option makes the debugger stop whenever an exception is raised, instead of just when the program terminates, and the second option tells PyCharm to ignore exceptions raised inside library files, so it breaks mainly in your own code.
The solution was found thanks to CrazyCoder's link to the feature request, which has since been added.
For a while I had a complicated scheme which involved something like the following:
try( Closeable ignore = Debugger.newBreakSuppression() )
{
    ... library call which may throw ...
} <-- exception looks like it is thrown here
This allowed me to never be bothered by exceptions that were thrown and swallowed within library calls. If an exception was thrown by a library call and was not caught, then it would appear as if it occurred at the closing curly bracket.
The way it worked was as follows:
Closeable is an interface which extends AutoCloseable without declaring any checked exceptions.
ignore is just a name that tells IntelliJ IDEA to not complain about the unused variable, and it is necessary because silly java does not support try( Debugger.newBreakSuppression() ).
Debugger is my own class with debugging-related helper methods.
newBreakSuppression() was a method which would create a thread-local instance of some BreakSuppression class which would take note of the fact that we want break-on-exception to be temporarily suspended.
Then I had an exception breakpoint with a break condition that would invoke my Debugger class to ask whether it is okay to break, and the Debugger class would respond with a "no" if any BreakSuppression objects were instantiated.
That was extremely complicated, because the VM throws exceptions before my code has loaded, so the filter could not be evaluated during program startup, and the debugger would pop up a dialog complaining about that instead of ignoring it. (I am not complaining about that; I hate silent errors.) So I had to resort to a terrible, horrible, do-not-try-this-at-home hack where the break condition looked like this: java.lang.System.err.equals( this ). Normally this would never return true, because System.err is not equal to a thrown exception, so the debugger would never break. However, when my Debugger class got initialized, it would replace System.err with a class of its own, which provided an implementation of equals(Object) that returned true if the debugger should break. So, essentially, I was using System.err as an eternal global variable.
Eventually I ditched this whole scheme because it is overly complicated and it performs very badly: exceptions apparently get thrown very often in the Java software ecosystem, so evaluating an expression every time an exception is thrown slows everything down tremendously.
This feature is not implemented yet, you can vote for it:
add ability to break (add breakpoint) on exceptions only for my files
There is another SO answer with a solution:
Debugging with pycharm, how to step into project, without entering django libraries
It is working for me, except I still go into the "_pydev_execfile.py" file, but I haven't stepped into other files after adding them to the exclusion in the linked answer.
