I'm starting to use Behave to implement some tests. I would like to replace some of my existing unittests (which are more feature tests). Some of these use assertRaises to check that certain calls to the back-end service raise the errors they should. Is it possible to have something similar in Behave (or maybe rather Gherkin)?
The following unittest calls my backend service; because a guest has logged on, it is not able to perform the admin task (do_admin_task), so it should raise an exception.
def test_mycall(self):
    service = myservice('guest', 'pwd')
    self.assertRaises(NoPermission, service.do_admin_task, some_param)
In my feature file, how would I create my scenario? Like this?
Scenario: test guest can't do an admin task
  Given I log on to my service as guest / pwd
  When I try to perform my admin task
  Then it should fail saying NoPermission
I believe that this will already raise an exception in the when step, so it won't even get to the then step.
One potential way around this that I can imagine is to create a specific step that performs both of these steps and does the exception handling. If, however, I want to mock errors in lower-level calls, then I would have to rewrite many of these steps, which is exactly what I'm hoping to avoid by switching to Behave in the first place.
How should I approach this?
When thinking on the Gherkin level, the exception is an expected outcome of the when step. So the step definition should have a try block and store the result/exception in the context. The then step can then check this result/exception.
@when(u'I try to perform my admin task')
def step_impl(context):
    try:
        service = myservice(context.user, context.password)
        context.admintaskresult = service.do_admin_task(some_param)
        context.admintaskexception = None
    except Exception as ex:
        context.admintaskresult = None
        context.admintaskexception = ex

@then(u'it should fail saying NoPermission')
def step_impl(context):
    assert isinstance(context.admintaskexception, NoPermission)
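For completeness, the given step only has to stash whatever the when step needs on the context. Below is a minimal sketch, assuming behave's default parse-style step matching and that myservice takes a username and a password; context.user and context.password are simply the names the when step above expects.

from behave import given

@given(u'I log on to my service as {user} / {password}')
def step_impl(context, user, password):
    # Store the credentials so the when step can build the service
    # and attempt the admin task with them.
    context.user = user
    context.password = password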
There are two lines that are not executed by my Django tests when the method is called via self.assertRaises.
I am using: Python 3.6.9, Django 3, Coverage.
I have this class:
class AverageWeatherService:
    subclasses = WeatherService.__subclasses__()
    valid_services = {
        subclass.service_key: subclass for subclass in subclasses
    }

    @classmethod
    def _check_service(cls, one_service):
        if one_service not in cls.valid_services:
            logger.exception("Not valid service sent")
            raise NotValidWeatherFormException("Not valid service sent")
And I have a local API that is up on my PC.
Then I wrote this test:
def test_integration_average_temp_services_error(self):
    self.assertRaises
    (
        NotValidWeatherFormException,
        AverageWeatherService()._check_service,
        "MyFakeService",
    )
And although the test is successful, with assertRaises seemingly used properly, it does not add coverage. But if I call this method in a wrong way, like this one:
def test_integration_average_temp_services_error2(self):
    self.assertRaises
    (
        NotValidWeatherFormException,
        AverageWeatherService()._check_service("MyFakeService")
    )
Then of course I get an error running the test, because the exception is raised and not properly caught by assertRaises, BUT it adds coverage. If I run the test this wrong way, my code is 100% covered. If I use assertRaises the first way, these two lines are not covered (according to coverage html):
logger.exception("Not valid service sent")
raise NotValidWeatherFormException("Not valid service sent")
Also, if I execute the method the first way, the logger exception is not shown in the console, whereas when I run the test the second way I can see the logger.exception output in the terminal.
Any ideas of what is going on?
Thanks in advance.
I managed to solve it.
This is the workaround:
def test_integration_average_temp_services_error(self):
    with self.assertRaises(NotValidWeatherFormException):
        AverageWeatherService()._check_service("MyFakeService")
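For reference, the call form of assertRaises also works, as long as the callable and its arguments are passed to assertRaises in a single statement (in the earlier snippets, the line break after self.assertRaises means the bound method is merely looked up and discarded, and the parenthesised lines form a separate tuple expression, so assertRaises never actually invokes anything). A sketch using the same names as the question:

def test_integration_average_temp_services_error(self):
    # Pass the callable plus its arguments; assertRaises performs the call
    # itself inside its own try/except block.
    self.assertRaises(
        NotValidWeatherFormException,
        AverageWeatherService()._check_service,
        "MyFakeService",
    )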
Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.
My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means that a test that causes such an exception won't outright fail, so long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. I would still like to know that the test is exposing an error-causing bug however.
Most of the Googling I have done shows results on how to display logs within pytest, which I am doing, but I can't find out whether there is a way to expose the logs within the test itself, such that I can fail any test that produces a log record at ERROR or CRITICAL level.
Edit: This is a minimal example of a failing attempt:
test.py:
import subject
import logging
import pytest

@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # Run in teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors

def test_main():
    subject.main()
    # assert False
subject.py:
import logging

logger = logging.Logger('s')

def main():
    logger.critical("log critical")
Running python3 -m pytest test.py passes with no errors.
Uncommenting the assert statement fails the test without errors, and prints [] to stdout, and log critical to stderr.
Edit 2:
I found why this fails. From the documentation on caplog:
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases
However, right underneath is what I should have found the first time:
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
However, this still doesn't make a difference, and print(caplog.get_records("call")) still outputs [].
You can build something like this using the caplog fixture. Here's some sample code from the docs which does some assertions based on the levels:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
Since the records are the standard logging record types, you can use whatever you need there. Here's one way you might do this more or less automatically, using an autouse fixture:
@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors
(disclaimer: I'm a core dev on pytest)
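To illustrate how the autouse fixture behaves, here is a hypothetical test that would trip its assertion during teardown; it assumes the fixture lives in a conftest.py so it applies across the test suite, and it uses a propagating logger obtained via logging.getLogger so caplog can capture the record.

import logging

def test_logs_an_error():
    # This record is at ERROR level, so the autouse fixture above finds it
    # in caplog.get_records('call') during teardown and its assert fails.
    logging.getLogger(__name__).error("something went wrong")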
You can use the unittest.mock module (even if using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have some assert that fails if, say, logging.error was called.
That'd be a short term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.
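A minimal sketch of that mock-based idea, assuming the code under test calls logging.error directly; run_input_loop is a hypothetical stand-in for your real entry point, and if your code logs through a named logger you would patch that logger's error method instead.

from unittest import mock

def test_no_errors_logged():
    # Patch the logging function the application is assumed to use; if the
    # code under test calls logging.error, the assertion below fails the test.
    with mock.patch("logging.error") as mock_error:
        run_input_loop()  # hypothetical entry point of the code under test
    mock_error.assert_not_called()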
I have a library management_utils.py that's something like:
path = global_settings.get_rdio_base_path()
if path == "":
    raise PathRequiredError("Path is required...")

def some_keyword():
    # keyword requires path to be set to some valid value
    ...
In my test case file I have something like:
*** Settings ***
Library    management_utils

*** Test Cases ***
Smoke Test
    some keyword
    ...
Is it possible to abort running these test cases if the management_utils setup fails? Basically I'd like to abort execution of these test cases if PathRequiredError was raised in management_utils.py.
When I run the tests, I see the error being raised but execution continues on.
I saw in the Robot documentation you can set ROBOT_EXIT_ON_FAILURE = True in your error class but this doesn't seem to work for this case. Also ideally I'd be able to do something more granular so that it only aborts the test cases that require this Library, not all test execution.
Thank you!
The problem is that the exception is raised during library loading, since it is at the top level of the module. ROBOT_EXIT_ON_FAILURE only takes effect if the failure comes from a keyword.
Instead, do this:
def get_path():
    path = global_settings.get_rdio_base_path()
    if path == "":
        raise PathRequiredError("Path is required...")
    return path

def some_keyword():
    path = get_path()
    ...
Now the exception is raised inside a keyword, and the test execution will be stopped.
As for the other point, there's no way to abort just some tests using ROBOT_EXIT_ON_FAILURE.
The Python standard library and other libraries I use (e.g. PyQt) sometimes use exceptions for non-error conditions. Look at the following excerpt of the function os.get_exec_path(). It uses multiple try statements to catch exceptions that are thrown while trying to find some environment data.
try:
    path_list = env.get('PATH')
except TypeError:
    path_list = None

if supports_bytes_environ:
    try:
        path_listb = env[b'PATH']
    except (KeyError, TypeError):
        pass
    else:
        if path_list is not None:
            raise ValueError(
                "env cannot contain 'PATH' and b'PATH' keys")
        path_list = path_listb

    if path_list is not None and isinstance(path_list, bytes):
        path_list = fsdecode(path_list)
These exceptions do not signify an error and are thrown under normal conditions. When using exception breakpoints for one of these exceptions, the debugger will also break in these library functions.
Is there a way in PyCharm or in Python in general to have the debugger not break on exceptions that are thrown and caught inside a library without any involvement of my code?
In PyCharm, go to Run --> View Breakpoints, and check "On raise" and "Ignore library files".
The first option makes the debugger stop whenever an exception is raised, instead of just when the program terminates, and the second option gives PyCharm the policy to ignore library files, thus searching mainly in your code.
The solution was found thanks to CrazyCoder's link to the feature request, which has since been added.
For a while I had a complicated scheme which involved something like the following:
try( Closeable ignore = Debugger.newBreakSuppression() )
{
    ... library call which may throw ...
} <-- exception looks like it is thrown here
This allowed me to never be bothered by exceptions that were thrown and swallowed within library calls. If an exception was thrown by a library call and was not caught, then it would appear as if it occurred at the closing curly bracket.
The way it worked was as follows:
Closeable is an interface which extends AutoCloseable without declaring any checked exceptions.
ignore is just a name that tells IntelliJ IDEA to not complain about the unused variable, and it is necessary because silly java does not support try( Debugger.newBreakSuppression() ).
Debugger is my own class with debugging-related helper methods.
newBreakSuppression() was a method which would create a thread-local instance of some BreakSuppression class which would take note of the fact that we want break-on-exception to be temporarily suspended.
Then I had an exception breakpoint with a break condition that would invoke my Debugger class to ask whether it is okay to break, and the Debugger class would respond with a "no" if any BreakSuppression objects were instantiated.
That was extremely complicated, because the VM throws exceptions before my code has loaded, so the filter could not be evaluated during program startup, and the debugger would pop up a dialog complaining about that instead of ignoring it. (I am not complaining about that, I hate silent errors.) So, I had to have a terrible, horrible, do-not-try-this-at-home hack where the break condition would look like this: java.lang.System.err.equals( this ) Normally, this would never return
true, because System.err is not equal to a thrown exception, therefore the debugger would never break. However, when my Debugger class would get initialized, it would replace System.err with a class of its own,
which provided an implementation for equals(Object) and returned true if the debugger should break. So, essentially, I was using System.err as an eternal global variable.
Eventually I ditched this whole scheme because it is overly complicated and it performs very badly: exceptions apparently get thrown very often in the Java software ecosystem, so evaluating an expression every time an exception is thrown slows everything down tremendously.
This feature is not implemented yet; you can vote for it:
add ability to break (add breakpoint) on exceptions only for my files
There is another SO answer with a solution:
Debugging with pycharm, how to step into project, without entering django libraries
It is working for me, except I still go into the "_pydev_execfile.py" file, but I haven't stepped into other files after adding them to the exclusion in the linked answer.
This problem is partly due to my not completely understanding scoping in Python, so I'll need to review that. Either way, here is a seriously trivial piece of code that keeps crashing in my Django test app.
Here's a snippet:
@login_required
def someview(request):
    try:
        usergroup = request.user.groups.all()[0].name
    except:
        HttpResponseRedirect('/accounts/login')

    if 'client' in usergroup:
        stafflist = ProxyUserModel.objects.filter(groups__name='staff')
No brain surgery here, the problem is I get an error such as the following:
File "/usr/local/django/myapp/views.py", line 18, in someview
if 'client' in usergroup:
UnboundLocalError: local variable 'usergroup' referenced before assignment
My question here is, why is usergroup unbound? If it's unbound, that means the try statement had an exception thrown at which point an HttpResponseRedirect should happen, but it never does. Instead I get an HTTP 500 error back, which is slightly confusing.
Yes I can write smarter code and ensure that the user logging in definitely has a group associated with them. But this isn't a production app, I'm just trying to understand / learn Python/Django. Why exactly is the above happening when a user that's not associated with a group logs in instead of a redirect to a login page?
In this case I'm intentionally logging in as a user that isn't part of a group. That means that the above code should throw an IndexError exception like the following:
>>> somelist = []
>>> print somelist[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: list index out of range
HttpResponseRedirect('/accounts/login')
You're creating it but not returning it. Flow continues to the next line, which references usergroup despite it never having been assigned due to the exception.
The except is also troublesome. In general you should never catch ‘everything’ (except: or except Exception:) as there are lots of odd conditions in there you could be throwing away, making debugging very difficult. Either catch the specific exception subclass that you think is going to happen when the user isn't logged on, or, better, use an if test to see if they're logged on. (It's not really an exceptional condition.)
E.g. in Django, normally:
if not request.user.is_authenticated():
    return HttpResponseRedirect('/accounts/login')
or if your concern is the user isn't in any groups (making the [0] fail):
groups = request.user.groups.all()
if len(groups) == 0:
    return HttpResponseRedirect('/accounts/login')
usergroup = groups[0].name
Try moving your if 'client' part inside your try block. Either that, or define usergroup = None right above the try.
In cases where you have a try…except suite and you want code to run iff no exceptions have occurred, it's a good habit to write the code as follows:
try:
    # code that could fail
except Exception1:
    # handle exception1
except Exception2:
    # handle exception2
else:  # the code-that-could-fail didn't
    # here runs the code that depends
    # on the success of the try clause
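Applied to the view from the earlier question, that pattern might look like the sketch below; the names someview and ProxyUserModel come from that question, the bare except is narrowed to IndexError, and the redirect is actually returned this time.

from django.contrib.auth.decorators import login_required
from django.http import HttpResponseRedirect

@login_required
def someview(request):
    try:
        usergroup = request.user.groups.all()[0].name
    except IndexError:
        # The user has no group: return the redirect instead of discarding it.
        return HttpResponseRedirect('/accounts/login')
    else:
        # Runs only if the try clause succeeded, so usergroup is bound here.
        if 'client' in usergroup:
            stafflist = ProxyUserModel.objects.filter(groups__name='staff')
        ...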