I wrote a unit test to test timeout handling with the requests package.
my_module.py:

import requests

class MyException(Exception): pass

def my_method():
    try:
        r = requests.get(...)
    except requests.exceptions.Timeout:
        raise MyException()
Unittest:

import my_module
from mock import patch
from unittest import TestCase
from requests.exceptions import Timeout
from my_module import MyException

@patch('my_module.requests')
class MyUnitTest(TestCase):
    def my_test(self, requests):
        def get(*args, **kwargs):
            raise Timeout()
        requests.get = get
        try:
            my_module.my_method(...)
        except MyException:
            return
        self.fail("No Timeout")
But when it runs, the try block in my_method never catches the requests.exceptions.Timeout.
There are two problems I see here. Fixing the first directly addresses your issue; the second is a slight misuse of the mocking framework, and fixing it further simplifies your implementation.
First, to address your issue directly: given what you are trying to assert, the problem is what you are doing here:
requests.get = get
You should be using a side_effect here to raise your exception. Per the mock documentation:
side_effect allows you to perform side effects, including raising an
exception when a mock is called
With that in mind, all you really need to do is this:
requests.get.side_effect = get
That should get your exception to raise. However, chances are you will then face this error:
TypeError: catching classes that do not inherit from BaseException is not allowed
This happens because patching the entire requests module makes my_module's requests.exceptions.Timeout a Mock object rather than a real exception class, so the except clause in my_method cannot catch against it. Mocking out only what you actually need resolves the issue. So, in the end, your code will look something like this, with a mocked get instead of a mocked requests module:
class MyUnitTest(unittest.TestCase):
    @patch('my_module.requests.get')
    def test_my_test(self, m_get):
        def get(*args, **kwargs):
            raise Timeout()
        m_get.side_effect = get
        try:
            my_method()
        except MyException:
            return
        self.fail("MyException was not raised")
You can now simplify this further by using unittest's assertRaises instead of the try/except; it asserts that the exception is raised when the method is called. Furthermore, you do not need to define a new function that raises a Timeout: you can simply give your mocked get a side_effect that raises the exception. So you can replace that entire def get with this:
m_get.side_effect = Timeout()
You can even put this directly into your patch decorator, so your final code will look like this:
class MyUnitTest(unittest.TestCase):
    @patch('my_module.requests.get', side_effect=Timeout())
    def test_my_test(self, m_get):
        with self.assertRaises(MyException):
            my_method()
I hope this helps!
patch('my_module.requests') replaces my_module.requests with a new mock object, but in your test method you replace requests.get on the directly imported, and therefore original, requests module, which means that change is not reflected within your module.
It should work if in your test method you replace it on the requests mock within your my_module instead:
my_module.requests.get = get
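For example, one minimal sketch of that idea, which swaps out just the get attribute rather than mocking the whole module (so requests.exceptions.Timeout stays a real exception class), and restores it afterwards; the my_module here is assumed from the question:

import my_module
from unittest import TestCase
from requests.exceptions import Timeout
from my_module import MyException

class MyUnitTest(TestCase):
    def test_timeout(self):
        def fake_get(*args, **kwargs):
            raise Timeout()
        original_get = my_module.requests.get
        # replace get on the requests module as my_module sees it
        my_module.requests.get = fake_get
        try:
            with self.assertRaises(MyException):
                my_module.my_method()
        finally:
            # restore the real get so other tests are unaffected
            my_module.requests.get = original_get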
I need to force an exception to be raised outside a function that does this:
def foo(x):
    try:
        some_calculation(x)
    except:
        print("ignore exception")
Is there a way to override the catch-all inside foo? I would like to raise an exception inside some_calculation(x) that can be caught or detected outside foo.
FYI, foo is a third party function I have no control over.
No. Your options are:
submit a fix to the library maintainers
fork the library, and provide your own vendorised version
monkey patch the library on import
The last is perhaps the easiest and quickest to get up and running.
Example:
main.py

# before doing anything else, import and patch the third party library;
# we need to patch foo before anyone else has a chance to import or use it
import third_party_library

# based off of third_party_library version 1.2.3
# we only catch Exception rather than using a bare except
def foo(x):
    try:
        third_party_library.some_calculation(x)
    except Exception:
        print("ignore exception")

third_party_library.foo = foo

# rest of program as usual
...
Things might be slightly more complicated than that if foo() is re-exported across several different modules (i.e. if the third party library has its own from <x> import foo statements). In that case it just requires monkey patching the corresponding attributes of the various re-exporting modules, as sketched below.
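For instance, a hypothetical sketch (the reports submodule name is made up):

import third_party_library
import third_party_library.reports  # hypothetical module that did `from ... import foo`

# rebind every name that still points at the original foo
third_party_library.foo = foo
third_party_library.reports.foo = foo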
Technically it would be possible to force an exception to be raised, but it would involve setting an execution trace and forcing an exception to be thrown from within the exception handling code of foo(). It would be weird: the exception would appear to come from print("ignore exception") rather than some_calculation(x). So don't do that.
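Purely to illustrate why that trick is so fragile, a rough sketch might look like this (the line number is hypothetical, and again: don't do this in real code):

import sys
import third_party_library

HANDLER_LINE = 42  # hypothetical line number of print("ignore exception") inside foo

def tracer(frame, event, arg):
    # when execution reaches the handler body inside foo, raise;
    # that line is outside the try block, so the exception escapes foo
    if event == "line" and frame.f_code.co_name == "foo" and frame.f_lineno == HANDLER_LINE:
        raise RuntimeError("escaped foo's catch-all")
    return tracer

sys.settrace(tracer)
third_party_library.foo(1)  # now raises RuntimeError, apparently from the print line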
I'm working on testing custom exceptions in Python 3. In the client code I have:
class myCustomException(Exception):
    pass

def someFunc():
    try:
        mathCheck = 2/0
        print(mathCheck)
    except ZeroDivisionError as e:
        raise myCustomException from e
On the test side:
def testExceptionCase(self):
    with self.assertRaises(ZeroDivisionError) as captureException:
        self.someFunc()
My question is:
How do I capture the chained exception, i.e. myCustomException, using unittest, proving that the custom exception was raised from the base ZeroDivisionError? (Assume I have already imported unittest and done the imports between the client and test files.)
Is there a way to verify the traceback chaining from ZeroDivisionError to myCustomException? Basically, this test should also fail if myCustomException was not raised. Appreciate any help!
from client import MyCustomException

def testExceptionCase(self):
    with self.assertRaises(MyCustomException):
        self.someFunc()
Also, you may want to use UpperCamelCase for exceptions and classes in general.
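If you also want to verify the chaining itself, assertRaises used as a context manager exposes the caught exception, and raise ... from e records the original exception on __cause__. A sketch, as an addition to the answer above:

def testExceptionChaining(self):
    with self.assertRaises(MyCustomException) as ctx:
        self.someFunc()
    # `raise ... from e` stores the original ZeroDivisionError on __cause__
    self.assertIsInstance(ctx.exception.__cause__, ZeroDivisionError)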
I have the following code that I'm trying to test:
great_report.py

from retry import retry

@retry((ReportNotReadyException), tries=3, delay=10, backoff=3)
def get_link(self):
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    else:
        stats.count("report_not_ready", 1)
        raise ReportNotReadyException
I've got a test function that mocks _get_report_link_from_3rd_party and tests everything, but I don't want it to actually pause execution when I run the tests:
@mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
def test_get_link_raises_exception(self, mock_get_report_link):
    self.assertRaises(ReportNotReadyException, get_link)
I tried mocking the retry parameters, but I ran into issues where get_link keeps retrying over and over, which causes long build times, instead of just raising the exception and continuing. How can I mock the parameters of the @retry call in my test?
An easy way to prevent the actual sleeping is to patch the time.sleep function. Here is the code that did that for me:
@patch('time.sleep', side_effect=lambda _: None)
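Combined with the test from the question, that might look like this; a sketch, with the module path and names assumed from the question:

import unittest
from unittest import mock

from repo.great_report import get_link, ReportNotReadyException  # path assumed from the patch target above

class GetLinkTest(unittest.TestCase):
    # the bottom-most decorator's mock is passed as the first argument
    @mock.patch('time.sleep', side_effect=lambda _: None)  # skip the real delays between retries
    @mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
    def test_get_link_raises_exception(self, mock_get_report_link, mock_sleep):
        self.assertRaises(ReportNotReadyException, get_link)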
There is no way to change a decorator's parameters after the module has been loaded. Decorators wrap the original function at module load time.
First, I would encourage you to change your design a little to make it more testable.
If you extract the body of the get_link() method, test the new method, and trust the retry decorator, you will achieve your goal (see the sketch below).
If you don't want to add a new method to your class, you can use a config module that stores the variables you pass to the retry decorator. You can then use two different modules for testing and production.
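A minimal sketch of the extraction idea (names such as _get_report_link_from_3rd_party, stats, and ReportNotReadyException are assumed from the question):

from retry import retry

def _fetch_link(params):
    # undecorated body: easy to test directly, no retries involved
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    stats.count("report_not_ready", 1)
    raise ReportNotReadyException

@retry(ReportNotReadyException, tries=3, delay=10, backoff=3)
def get_link(params):
    # thin wrapper that only adds the retry policy
    return _fetch_link(params)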
The last way is the hacking way, where you replace retry.api.__retry_internal with your own version that invokes the original one with different variables:
import unittest
from unittest.mock import *
from pd import get_link, ReportNotReadyException
import retry

orig_retry_internal = retry.api.__retry_internal

def _force_retry_params(new_tries=-1, new_delay=0, new_max_delay=None, new_backoff=1, new_jitter=0):
    def my_retry_internals(f, exceptions, tries, delay, max_delay, backoff, jitter, logger):
        # call the original __retry_internal with our own parameters instead
        return orig_retry_internal(f, exceptions, tries=new_tries, delay=new_delay, max_delay=new_max_delay,
                                   backoff=new_backoff, jitter=new_jitter, logger=logger)
    return my_retry_internals

class MyTestCase(unittest.TestCase):
    @patch("retry.api.__retry_internal", side_effect=_force_retry_params(new_tries=1))
    def test_something(self, m_retry):
        self.assertRaises(ReportNotReadyException, get_link, None)
IMHO you should use that hacking solution only if your back is against the wall and you have no chance to redesign your code to make it more testable. Internal functions/classes/methods can change without notice, and your test can become difficult to maintain in the future.
I have this code (it tries to test the internet connection with a non-blocking call):
#!/usr/bin/env python
from tornado.httpclient import AsyncHTTPClient

http_client = AsyncHTTPClient()
a = ''

def on_g(response):
    if response.error:
        on_b()
    else:
        global a
        a = response.body
        return True

http_client.fetch("http://www.google.com/", on_g)

def on_b(response):
    if response.error:
        return False
    else:
        return True

http_client.fetch("http://www.baidu.com/", on_b)
How can I call on_g() or on_b() for debugging purposes, given that each needs a response argument?
For testing, you could always mock the parameter with a simple wrapper around dict; for example:
class Mock(dict):
    def __getattr__(self, key):
        return self[key]

on_b(Mock({
    'error': False,
    'body': 'Mock body',
}))
And then check if the return value/changes in global state match your expectations.
Creating exactly the same response object as Tornado is not necessary, since Python does duck typing. Passing a mock object where you're 100% sure of what the various parameters are is often better for testing, since it's much more transparent.
There are larger libraries (such as mock) that you'll probably want to use if you're doing this more often than a handful of times; but this should work okay for quick testing, or just a few simple tests.
You'll need to mock the response object. I've done this with Tornado previously, and it works just fine. This also gives you control: you can test arbitrary responses that you would not otherwise be able to reproduce easily. I recommend the mock package, but you can also do it by hand.
Here's an example using the mock package:
from mock import Mock
on_b(Mock(error=False, body='body'))
Here's an example by hand:
class Mock(object):
    pass

mock = Mock()
mock.error = False
mock.body = 'body'

on_b(mock)
I'm not quite sure what you are asking, but the callback functions are just functions and can be called like any other Python function:
on_g(some_object)
on_b(some_object)
You could import them and call them in tests.
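For example, a minimal sketch, assuming the callbacks above live in a module called my_checker (a hypothetical name) and using unittest.mock instead of the standalone mock package:

import unittest
from unittest.mock import Mock
from my_checker import on_b  # hypothetical module name for the code in the question

class CallbackTest(unittest.TestCase):
    def test_on_b_success(self):
        self.assertTrue(on_b(Mock(error=False, body='ok')))

    def test_on_b_error(self):
        self.assertFalse(on_b(Mock(error=True)))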
Quick background: I'm writing a module. One of my objects has methods that may or may not complete successfully, depending on the framework underneath my module. So a few methods first need to check which framework they actually have under their feet. My current way of tackling this is:
def framework_dependent_function():
    try:
        import module.that.may.not.be.available
    except ImportError:
        # the required functionality is not available,
        # so this function can not be run
        raise WrongFramework
        # or should I just let the previous exception reach higher levels?
    [ ... and so on ... ]
Yet something in my mind keeps telling me that doing imports in the middle of a file is a bad thing. Can't remember why, can't even come up with a reason - apart from slightly messier code, I guess.
So, is there anything downright wrong about doing what I'm doing here? Or are there better ways of scouting out which environment the module is running in, somewhere near __init__?
This version may be faster, because not every call to the function needs to try to import the necessary functionality:
try:
    import module.that.may.not.be.available
    def framework_dependent_function():
        # whatever
        ...
except ImportError:
    def framework_dependent_function():
        # the required functionality is not available;
        # this function can not be run
        raise NotImplementedError
This also allows you to do a single attempt to import the module, then define all of the functions that might not be available in a single block, perhaps even as
def notimplemented(*args, **kwargs):
    raise NotImplementedError

fn1 = fn2 = fn3 = notimplemented
Put this at the top of your file, near the other imports, or in a separate module (my current project has one called utils.fixes). If you don't like function definitions in a try/except block, then do:
try:
    from module.that.may.not.be.available import what_we_need
except ImportError:
    what_we_need = notimplemented
If these functions need to be methods, you can then add them to your class later:
class Foo(object):
    # assuming you've added a self argument to the previous function
    framework_dependent_method = framework_dependent_function
Similar to larsmans' suggestion, but with a slight change:
def not_implemented():
    # renamed from NotImplemented to avoid shadowing the built-in constant
    raise NotImplementedError

def framework_dependent_function():
    # whatever
    return

try:
    import something.external
except ImportError:
    # the real definition above is replaced only when the import fails
    framework_dependent_function = not_implemented

I don't like the idea of function definitions inside the try/except around the import.
You could also use imp.find_module (or, on modern Python, importlib.util.find_spec) to check for the presence of a specific module without importing it.
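For example, a small sketch using importlib, where find_spec is the modern replacement for the deprecated imp.find_module (the framework name is hypothetical):

import importlib.util

def framework_available(name):
    # True if the module can be found on the path, without importing it
    return importlib.util.find_spec(name) is not None

if framework_available("some_framework"):
    print("framework present")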