I have an object created in a test case, and I want to run assertions inside one of its methods.
But the exception is swallowed by a try-except clause.
I know I could re-raise the exception in run(), but that is not what I want. Is there a unittest tool that can handle this?
It seems that the assertTrue method of unittest.TestCase is essentially just a plain assert statement.
import unittest

class TestDemo(unittest.TestCase):
    def test_a(self):
        test_case = self

        class NestedProc:
            def method1(self):
                print("flag to show the method is running")
                test_case.assertTrue(False)

            def run(self):
                try:
                    self.method1()
                except Exception:
                    pass  # could re-raise here to surface the exception, but that is not what I want

        NestedProc().run()        # no exception raised
        # NestedProc().method1()  # exception raised
EDIT
For clarity, I paste my real-world test case here. The tricky part is that ParentProcess always succeeds, so the AssertionError is not propagated to the test function.
class TestProcess(unittest.TestCase):

    @pytest.mark.asyncio
    async def test_process_stack_multiple(self):
        """
        Run multiple and nested processes to make sure the process stack is always correct
        """
        expect_true = []

        def test_nested(process):
            expect_true.append(process == Process.current())

        class StackTest(plumpy.Process):
            def run(self):
                # TODO: unexpected behaviour here
                # if an AssertionError happens here it is not raised;
                # it is handled by the try-except clause in the process.
                # Is there a better way to handle this?
                expect_true.append(self == Process.current())
                test_nested(self)

        class ParentProcess(plumpy.Process):
            def run(self):
                expect_true.append(self == Process.current())
                proc = StackTest()
                # launch the inner process
                asyncio.ensure_future(proc.step_until_terminated())

        to_run = []
        for _ in range(100):
            proc = ParentProcess()
            to_run.append(proc)

        await asyncio.gather(*[p.step_until_terminated() for p in to_run])

        for proc in to_run:
            self.assertEqual(plumpy.ProcessState.FINISHED, proc.state)
        for res in expect_true:
            self.assertTrue(res)
Any assert* method, and even fail(), just raises an exception. The easiest approach is probably to manually set a flag and call fail() afterwards:
def test_a(self):
    success = True

    class NestedProc:
        def method1(self):
            nonlocal success
            success = False
            raise Exception()
        ...

    NestedProc().run()
    if not success:
        self.fail()
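A complete sketch of that flag pattern, with run() filled in from the question's example:

import unittest

class TestDemo(unittest.TestCase):
    def test_a(self):
        success = True

        class NestedProc:
            def method1(self):
                nonlocal success
                success = False          # record the failure before it gets swallowed
                raise Exception("method1 failed")

            def run(self):
                try:
                    self.method1()
                except Exception:
                    pass                 # swallowed, as in the question

        NestedProc().run()               # no exception escapes ...
        if not success:
            self.fail("method1 reported a failure")  # ... but the test still fails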
Related
I'm using unittest and mock for testing a script which looks like this
import sys

class Hi:
    def call_other(self):
        # perform some operation
        sys.exit(1)

    def f(self):
        try:
            res = self.do_something()
            a = self.something_else(res)
        except Exception as e:
            print(e)
            self.call_other()
        print("hi after doing something")  # -----> (this_print)

    def process(self):
        self.f()
and my test script looks like this
class Test_hi(unittest.TestCase):
    def mock_call_other(self):
        print("called during error")

    def test_fail_scenario(self):
        # import the Hi class here
        h = Hi()
        h.do_something = mock.Mock(return_value="resource")
        h.something_else = mock.Mock(side_effect=Exception('failing on purpose for testing'))
        h.call_other = mock.Mock(side_effect=self.mock_call_other)  # -----> (this_line)
        h.process()
If I don't mock the call_other method, it will call sys.exit(1), which causes problems for the unittest run, so I don't want to call sys.exit(1) in call_other during testing.
However, if I mock call_other as above (in this_line), it will simply print something and continue executing method f, meaning it will execute the print statement (in this_print).
That should not be the case in the real program: when the exception is caught, it does a sys.exit(1) and stops the program.
How can I achieve the same using unittest and mock? When the exception is caught, I want to stop the execution of this test case and move on to the next test case.
How to achieve this? Please help
You can use unittest's built-in functionality for asserting that an exception is raised, without the need for mock:
import unittest
import sys

class ToTest:
    def foo(self):
        raise SystemExit(1)

    def bar(self):
        sys.exit(1)

    def foo_bar(self):
        print("This is okay")
        return 0

class Test(unittest.TestCase):
    def test_1(self):
        with self.assertRaises(SystemExit) as cm:
            ToTest().foo()
        self.assertEqual(cm.exception.code, 1)

    def test_2(self):
        with self.assertRaises(SystemExit) as cm:
            ToTest().bar()
        self.assertEqual(cm.exception.code, 1)

    def test_3(self):
        self.assertEqual(ToTest().foo_bar(), 0)
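Applied to the Hi class from the question above, the same idea works together with mock: give the mocked call_other a SystemExit side effect, so f() stops exactly where the real code would, and assert on it. This is only a sketch, assuming Hi is importable and that f() calls self.call_other() as in the question:

import unittest
from unittest import mock

class TestHi(unittest.TestCase):
    def test_fail_scenario(self):
        h = Hi()
        h.do_something = mock.Mock(return_value="resource")
        h.something_else = mock.Mock(side_effect=Exception("failing on purpose for testing"))
        # Raise SystemExit from the mock instead of running the real sys.exit(1),
        # so execution of f() still stops at call_other.
        h.call_other = mock.Mock(side_effect=SystemExit(1))

        with self.assertRaises(SystemExit):
            h.process()
        # the "hi after doing something" print is never reached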
First of all, sorry for the wording of the question, I can't express it in a more compact form.
Let's say I have code like this in Python:
something_happened = False

def main():
    # 'procs' is a list of procedures
    for proc in procs:
        try:
            # Any of these can set the 'something_happened'
            # global var to True
            proc()
        except Exception as e:
            handle_unexpected_exception(e)
            continue
    # If some procedure found some problem,
    # print a reminder to check the logging files
    if something_happened:
        print('Check the logfile, just in case.')
Any of the involved procedures may encounter some problem, but execution MUST continue: the problem is properly logged and that's the ONLY handling needed, really. The problems that may arise while running the procedures shouldn't stop the program, so this shouldn't involve raising an exception and stopping the execution.
The reason the logfile should be checked is that some of the problems may need further human action, but the program can't do anything about them other than logging them and keeping running (long story).
Right now the only ways of achieving this that I can think of are to make each procedure set something_happened = True after logging a potential problem, using a global variable which may be set from any of the procedures, or to return a status code from the procedures.
And yes, I know I can raise an exception from the procedures instead of setting a global or returning an error code, but that would only work because I'm running them in a loop, and this may change in the future (and then raising an exception would jump out of the try block), so that's my last resort.
Can anyone suggest a better way of dealing with this situation? Yes, I know, this is a very particular use case, but that's the reason I'm not raising an exception in the first place, and I'm just curious because I didn't find anything after googling for hours...
Thanks in advance :)
You have a variable that may be set to True by any of the procs. It looks like a common OOP schema:
class A():
    """Don't do that"""
    def __init__(self, logger):
        self._logger = logger
        self._something_happened = False

    def proc1(self):
        try:
            ...
        except KeyError as e:
            self._something_happened = True
            self._logger.log(...)

    def proc2(self):
        ...

    def execute(self):
        for proc in [self.proc1, self.proc2, ...]:
            try:
                proc()
            except Exception as e:
                self._handle_unexpected_exception(e)
                continue
        if self._something_happened:
            print('Check the logfile, just in case.')
But that's a very bad idea, because you're violating the Single Responsibility Principle: your class has to know about proc1, proc2, ... You have to reverse the idea:
class Context:
    def __init__(self):
        self.something_happened = False

def main():
    ctx = Context()
    for proc in procs:
        try:
            proc(ctx)  # proc may set ctx.something_happened to True
        except Exception as e:
            handle_unexpected_exception(e)
            continue
    if ctx.something_happened:
        print('Check the logfile, just in case.')
Creating an empty class like that is not very attractive. You can take the idea further:
class Context:
    def __init__(self, logger):
        self._logger = logger
        self._something_happened = False

    def handle_err(self, e):
        self._something_happened = True
        self._logger.log(...)

    def handle_unexpected_exception(self, e):
        ...
        self._logger.log(...)

    def after(self):
        if self._something_happened:
            print('Check the logfile, just in case.')

def proc1(ctx):
    try:
        ...
    except KeyError as e:
        ctx.handle_err(e)  # you delegate the error handling to ctx

def proc2(ctx):
    ...

def main():
    ctx = Context(logging.getLogger("main"))
    for proc in procs:
        try:
            proc(ctx)
        except Exception as e:
            ctx.handle_unexpected_exception(e)
    ctx.after()
The main benefit here is that you can use another Context if you want:
class StrictContext:
    def handle_err(self, e):
        raise e

    def handle_unexpected_exception(self, e):
        raise e

    def after(self):
        pass
Or
class LooseContext:
    def handle_err(self, e):
        pass

    def handle_unexpected_exception(self, e):
        pass

    def after(self):
        pass
Or whatever you need.
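As a small usage sketch (assuming the procs take the context argument as above), main can accept the context as a parameter, so the strict and loose variants become drop-in replacements:

def main(ctx):
    for proc in procs:
        try:
            proc(ctx)
        except Exception as e:
            ctx.handle_unexpected_exception(e)
    ctx.after()

main(Context(logging.getLogger("main")))  # normal run: log and print the reminder
main(StrictContext())                     # strict run: any handled error is re-raised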
Looks like the cleaner solution is to raise an exception, and I will change the code accordingly. The only problem is what will happen if in the future the loop goes away, but I suppose I'll cross that bridge when I get to it ;) and then I'll use another solution or I'll try to change the main code myself.
@cglacet, @Phydeaux, thanks for your help and suggestions.
I have a class like:
class TestClass(object):
    def __init__(self, *args):
        try:
            ## check some condition
        except:
            return
            ## Should exit class

    def do_something_else(self):
        ...

    def return_something(self):
        ## return something
Now I am trying to call the class like:
TestClass(arg1, arg2, ..).do_something_else()
something = TestClass(arg1, arg2, ..).return_something()
When I execute the first command, my condition fails and raises an exception.
What I want is that if some exception occurs in the __init__ function, then do_something_else should not be called and control flow should go to the second command.
In the second command, all conditions are met and the return_something function should be called.
How can I achieve this?
Maybe I'm wrong, but I'd keep it simple, using a flag variable and doing it this way:
class TestClass(object):
    def __init__(self, *args):
        self.flag = False
        try:
            ## check some condition
        except:
            self.flag = True

    def do_something_else(self):
        if self.flag:
            # do what you want, e.g. call a second command
            return
        ...

    def return_something(self):
        ## return something
I would suggest you handle the exceptional condition outside the constructor rather than inside it.
Instead of
TestClass(arg1, arg2, ..).do_something_else()
do
try:
    obj = TestClass(arg1, arg2)
except:
    pass
else:
    obj.do_something_else()
And remove the try/except statement from the __init__ method.
You shouldn't return anything from the __init__ method.
You can just create an object of the class TestClass and return "True" from the try block and "False" from the except block. Check if the value is True or False and execute the required function.
Creating an object will automatically trigger the __init__ method and set true or false based on your condition. Check that value to decide whether to execute the required method or not.
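A small sketch of that idea, using a validity flag set in __init__ (the flag name and the placeholder condition are just illustrative):

class TestClass(object):
    def __init__(self, *args):
        self.valid = True
        try:
            pass  # check some condition, as in the question
        except Exception:
            self.valid = False

    def do_something_else(self):
        print("doing something else")

obj = TestClass("arg1", "arg2")
if obj.valid:                  # skip the call when __init__ hit an exception
    obj.do_something_else()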
I am using pytest with Selenium to automate a website. I want to take a screenshot only when a test case fails. I have previously used TestNG, where this is quite easy using the ITestListener.
Do we have something like that in pytest?
I have tried to achieve this using teardown_method(), but this method is not executed when a test case fails.
import sys
from unittestzero import Assert

class TestPY:
    def setup_method(self, method):
        print("in setup method")
        print("executing " + method.__name__)

    def teardown_method(self, method):
        print(".....teardown")
        if sys.exc_info()[0]:
            test_method_name = method
            print(test_method_name)

    def test_failtest(self):
        Assert.fail("failed test")
teardown_method() gets executed only when there are no failures.
Based on your post, I can share something that comes to mind; I hope it will help :wink:
What you're trying to do is handle the standard AssertionError exception, which can be raised by the assert keyword, by any assertion method implemented in unittest.TestCase, or by any custom assertion method that raises a custom exception.
There are three ways to do that:
Use a try-except construction. A basic example:
try:
    Assert.fail("failed test")
except AssertionError:
    get_screenshot()
    raise
Or use the with statement, as a context manager:
class TestHandler:
    def __enter__(self):
        # maybe some set up is expected before the assertion method call
        pass

    def __exit__(self, exc_type, exc_val, exc_tb):
        # catch whether an exception was raised
        if isinstance(exc_val, AssertionError):
            get_screenshot()

with TestHandler():
    Assert.fail("failed test")
here you can dive deeper on how to play with it
The last one, in my opinion, is the most elegant approach: using decorators. With this decorator you can decorate any test method:
def decorator_screenshot(func):
    def wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except AssertionError:
            get_screenshot()
            raise
    return wrapper

@decorator_screenshot
def test_something():
    Assert.fail("failed test")
After some struggle, eventually this worked for me.
In conftest.py:
import pytest

@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)
    return rep
And, in your code, in a fixture (e.g., in a teardown fixture for tests) use it like so:
@pytest.fixture(autouse=True)  # or request it explicitly from the tests that need it
def tear_down(request):
    yield  # let the test run; the hook above attaches the report to request.node
    method_name = request.node.name
    if request.node.rep_call.failed:
        print('test {} failed :('.format(method_name))
        # do more stuff like take a selenium screenshot
Note that "request" is a fixture "funcarg" that pytest provides in the context of your tests. You don't have to define it yourself.
Sources: pytest examples and thread on (not) making this easier.
This is how we do it. Note that __multicall__ has very little documentation, and I remember reading that __multicall__ is going to be deprecated, so please take this with a pinch of salt and experiment with replacing __multicall__ with 'item, call' as per the examples.
def pytest_runtest_makereport(__multicall__):
    report = __multicall__.execute()
    if report.when == 'call':
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            try:
                screenshot = APP_DRIVER.take_screen_shot(format="base64")
            except Exception as e:
                LOG.debug("Error saving screenshot !!")
                LOG.debug(e)
    return report
def pytest_runtest_makereport(item, call):
    if call.when == 'call':
        if call.excinfo is not None:
            # if excinfo is not None, this test item is a failed test case
            # (error() here is a logging helper defined elsewhere in the project)
            error("Test Case: {}.{} Failed.".format(item.location[0], item.location[2]))
            error("Error: \n{}".format(call.excinfo))
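On a newer pytest, the same report-based idea can be written as a hookwrapper without __multicall__. A minimal sketch for taking a screenshot on failure, assuming the Selenium driver is reachable from the test instance (the driver attribute name used here is just an assumption):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.failed:
        driver = getattr(item.instance, "driver", None)  # hypothetical attribute
        if driver is not None:
            driver.save_screenshot("{}.png".format(item.name))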
I was wondering: is there a simple magic method in Python that allows customizing the behaviour of an exception-derived object when it is raised? I'm looking for something like __raise__, if that exists. If no such magic method exists, is there any way I could do something like the following (it's just an example to prove my point):
class SpecialException(Exception):
    def __raise__(self):
        print('Error!')

raise SpecialException()  # this is the part of the code that must stay
Is it possible?
I don't know about such a magic method, but even if it existed it would just be some piece of code that gets executed before actually raising the exception object. Assuming it's good practice to raise exception objects that are instantiated in place, you can put such code into the __init__ of the exception. Another workaround: instead of raising your exception directly, you call an error-handling method/function that executes the special code and then finally raises the exception.
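A minimal sketch of the first workaround, putting the side effect into __init__ so that it runs right before the raise whenever the exception is instantiated in place:

class SpecialException(Exception):
    def __init__(self, *args):
        print('Error!')           # runs as the exception object is created
        super().__init__(*args)

raise SpecialException()          # prints 'Error!' and then raises as usual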
import time
from functools import wraps

def capture_exception(callback=None, *c_args, **c_kwargs):
    """Run the callback function after an exception is caught."""
    assert callable(callback), "callback must be a callable object"

    def _out(func):
        @wraps(func)
        def _inner(*args, **kwargs):
            try:
                res = func(*args, **kwargs)
                return res
            except Exception as e:
                callback(*c_args, **c_kwargs)
                raise e
        return _inner
    return _out

def send_warning():
    print("warning message..............")

class A(object):
    @capture_exception(callback=send_warning)
    def run(self):
        print('run')
        raise SystemError("test the exception-capture callback")
        time.sleep(0.2)

if __name__ == '__main__':
    a = A()
    a.run()