Calling a callback function alone? - python

I have this code (it tries to test the internet connection with a non-blocking call):
#!/usr/bin/env python
from tornado.httpclient import AsyncHTTPClient

http_client = AsyncHTTPClient()
a = ''

def on_g(response):
    if response.error:
        on_b()
    else:
        global a
        a = response.body
        return True

http_client.fetch("http://www.google.com/", on_g)

def on_b(response):
    if response.error:
        return False
    else:
        return True

http_client.fetch("http://www.baidu.com/", on_b)
How can I call on_g() or on_b() for debugging purposes? Each of them needs an argument, which is the response.

For testing, you could always mock the parameter with a simple wrapper around dict; for example:
class Mock(dict):
    def __getattr__(self, key):
        return self[key]

on_b(Mock({
    'error': False,
    'body': 'Mock body',
}))
And then check if the return value/changes in global state match your expectations.
Creating exactly the same response object as Tornado is not necessary, since Python does duck typing. Passing a mock object where you're 100% sure of what the various parameters are is often better for testing, since it's much more transparent.
There are larger libraries (such as mock) that you'll probably want to use if you're doing this more often than a handful of times; but this should work okay for quick testing, or just a few simple tests.

You'll need to mock the response object. I've done this using tornado previously, and it works just fine. This also gives you the control that you can test arbitrary responses that you would not otherwise be able to reproduce easily. I recommend the mock package, but you can also do it by hand.
Here's an example using the mock package:
from mock import Mock

on_b(Mock(error=False, body='body'))
Here's an example by hand:
class Mock(object):
    pass

mock = Mock()
mock.error = False
mock.body = 'body'
on_b(mock)

I'm not quite sure what you are asking, but the callback functions are just functions and can be called like any other Python function:
on_g(some_object)
on_b(some_object)
You could import them and call them in tests.
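For example, a minimal sketch (assuming the callbacks above live in a module named connectivity_check, an invented name for this sketch), using a simple stand-in object with just the attributes the callbacks read:
from types import SimpleNamespace

from connectivity_check import on_b  # hypothetical module holding the callbacks

# on_b only looks at response.error, so any object with that attribute works.
ok_response = SimpleNamespace(error=None, body=b'<html>ok</html>')
failed_response = SimpleNamespace(error=Exception('connection failed'), body=None)

assert on_b(ok_response) is True
assert on_b(failed_response) is False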

Related

How do I have to mock patch in this use case?

The initial scenario is writing tests for functions from a library (lib.py).
lib.py:
def fun_x(val):
    # does something with val
    return result

def fun(val):
    x = fun_x(val)
    # does something with x
    return result
test__lib.py
import pytest
import lib

def lib_fun_x_mocked(val):
    return "qrs"

def test_fun():
    assert lib.fun("abc") == "xyz"
But lib.fun_x() does something very expensive, or requires a resource that is not reliably available, or is not deterministic. So I want to substitute it with a mock function, such that when test_fun() is executed, lib.fun() uses lib_fun_x_mocked() instead of the fun_x() from its local scope.
So far I'm running into cryptic error messages when I try to apply mock/patch recipes.
You can use the built-in fixture monkeypatch provided by pytest.
import lib

def lib_fun_x_mocked(some_val):  # still takes an argument
    return "qrs"

def test_fun(monkeypatch):
    with monkeypatch.context() as mc:
        mc.setattr(lib, 'fun_x', lib_fun_x_mocked)
        result = lib.fun('abc')
        assert result == 'qrs'
Also as a side note, if you are testing the function fun you shouldn't be asserting the output of fun_x within that test. You should be asserting that fun behaves in the way that you expect given a certain value is returned by fun_x.
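For example, a sketch of that idea (the exact assertion is illustrative, since the question does not show what fun does with x; it is assumed here, purely for illustration, that fun_x's result ends up inside fun's return value):
import lib

def test_fun_uses_fun_x_result(monkeypatch):
    # Pin fun_x to a known value, then assert on how fun behaves given
    # that value instead of re-testing fun_x itself.
    monkeypatch.setattr(lib, 'fun_x', lambda val: 'qrs')
    result = lib.fun('abc')
    assert 'qrs' in result  # illustrative: depends on what fun really does with x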

Python Requests Mock doesn't catch Timeout exception

I wrote a unittest to test timeout with the requests package
my_module.py:
import requests

class MyException(Exception): pass

def my_method():
    try:
        r = requests.get(...)
    except requests.exceptions.Timeout:
        raise MyException()
Unittest:
from mock import patch
from unittest import TestCase
from requests.exceptions import Timeout

import my_module
from my_module import MyException

@patch('my_module.requests')
class MyUnitTest(TestCase):
    def my_test(self, requests):
        def get(*args, **kwargs):
            raise Timeout()
        requests.get = get
        try:
            my_module.my_method(...)
        except MyException:
            return
        self.fail("No Timeout")
But when it runs, the try block in my_method never catches the requests.exceptions.Timeout
There are two problems I see here. The first directly fixes your problem; the second is a slight misuse of the mocking framework, and fixing it further simplifies your implementation.
First, to directly address your issue: based on how you are looking to test your assertion, what you are actually trying to do here is this:
requests.get = get
You should be using a side_effect here to help raise your exception. Per the documentation:
side_effect allows you to perform side effects, including raising an
exception when a mock is called
With that in mind, all you really need to do is this:
requests.get.side_effect = get
That should get your exception to raise. However, chances are you might face this error:
TypeError: catching classes that do not inherit from BaseException is not allowed
This happens because patching the entire requests module turns requests.exceptions.Timeout inside my_module into a Mock attribute rather than a real exception class, so the except clause can no longer use it. That is best explained by reading this great answer about why it happens. With that in mind, taking the suggestion to mock out only what you need will fully resolve your issue. So, in the end, your code will look something like this, with a mocked get instead of a mocked requests module:
class MyUnitTest(unittest.TestCase):
    @patch('my_module.requests.get')
    def test_my_test(self, m_get):
        def get(*args, **kwargs):
            raise Timeout()
        m_get.side_effect = get
        try:
            my_method()
        except MyException:
            return
You can now simplify this further by making better use of what is in unittest, using assertRaises instead of the try/except. It simply asserts that the exception was raised when the method is called. Furthermore, you do not need to create a new function that raises a timeout; you can simply state that your mocked get has a side_effect that raises an exception. So you can replace that entire def get with simply this:
m_get.side_effect = Timeout()
However, you can put this directly into your patch decorator, so now your final code will look like this:
class MyUnitTest(unittest.TestCase):
    @patch('my_module.requests.get', side_effect=Timeout())
    def test_my_test(self, m_get):
        with self.assertRaises(MyException):
            my_method()
I hope this helps!
patch('my_module.requests') will replace my_module.requests with a new mock object, but in your test method you replace the get method on the directly imported requests module, i.e. on the original module, which means that change is not reflected within your module.
It should work if in your test method you replace it on the requests mock within your my_module instead:
my_module.requests.get = get
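As a general illustration of targeting the name where it is looked up (module and function names below are invented for this sketch, not taken from the question):
mylib.py (hypothetical):
import requests

def fetch():
    return requests.get('https://example.com')
test_mylib.py:
from unittest.mock import patch
import mylib

def test_fetch_uses_patched_get():
    # patch('mylib.requests.get') resolves mylib.requests (the real requests
    # module imported by mylib) and temporarily swaps out its get attribute,
    # so the call inside fetch() sees the mock.
    with patch('mylib.requests.get', return_value='stubbed') as m_get:
        assert mylib.fetch() == 'stubbed'
        m_get.assert_called_once()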

Changing decorator params in unit tests [duplicate]

I have the following code that I'm trying to test:
great_report.py
from retry import retry

@retry((ReportNotReadyException), tries=3, delay=10, backoff=3)
def get_link(self):
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    else:
        stats.count("report_not_ready", 1)
        raise ReportNotReadyException
I've got my testing function which mocks _get_report_link_from_3rd_party and tests everything, but I don't want this function to actually pause execution when I run tests.
@mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
def test_get_link_raises_exception(self, mock_get_report_link):
    self.assertRaises(ReportNotReadyException, get_link)
I tried mocking the retry parameters but ran into issues where get_link keeps retrying over and over, which causes long build times instead of just raising the exception and continuing. How can I mock the parameters of the @retry call in my test?
As hinted here, an easy way to prevent the actual sleeping is by patching the time.sleep function. Here is the code that did that for me:
@patch('time.sleep', side_effect=lambda _: None)
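For example, combined with the mock from the question (module paths and the get_link call are taken from the question's own test; this is a sketch following that suggestion, not a verified drop-in):
import unittest
from unittest import mock

from repo.great_report import get_link, ReportNotReadyException  # paths as in the question

class GetLinkTest(unittest.TestCase):
    @mock.patch('time.sleep', side_effect=lambda _: None)  # make retry's waits instantaneous
    @mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
    def test_get_link_raises_exception(self, mock_get_report_link, mock_sleep):
        # retry still performs its three attempts, but without the real delays,
        # so the test finishes quickly.
        self.assertRaises(ReportNotReadyException, get_link)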
There is no way to change a decorator's parameters after the module is loaded. Decorators wrap the original function and replace it at module load time.
First, I would encourage you to change your design a little to make it more testable.
If you extract the body of the get_link() method, test the new method, and trust the retry decorator, you will reach your goal.
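A sketch of that refactor (the helper name _fetch_report_link is invented; the rest reuses the names from the question's excerpt):
from retry import retry

def _fetch_report_link(params):
    # Undecorated body: can be tested directly, with no retries or sleeps.
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    stats.count("report_not_ready", 1)
    raise ReportNotReadyException

@retry(ReportNotReadyException, tries=3, delay=10, backoff=3)
def get_link(self):
    # Production entry point keeps the retry behaviour untouched.
    return _fetch_report_link(params)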
If you don't want to add a new method to your class, you can use a config module that stores the variables you pass to the retry decorator. You can then use two different modules for testing and production.
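A sketch of that config-module idea (retry_config is an invented module name; tests override its values before great_report is imported):
# retry_config.py
TRIES = 3
DELAY = 10
BACKOFF = 3

# great_report.py reads the parameters from the config module at import time
from retry import retry
import retry_config

@retry(ReportNotReadyException, tries=retry_config.TRIES,
       delay=retry_config.DELAY, backoff=retry_config.BACKOFF)
def get_link(self):
    ...

# conftest.py (or any test bootstrap executed before great_report is imported)
import retry_config
retry_config.TRIES = 1
retry_config.DELAY = 0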
The last way is the hacking way, where you replace retry.api.__retry_internal with your own version that invokes the original one with different parameters:
import unittest
from unittest.mock import *
from pd import get_link, ReportNotReadyException
import retry

orig_retry_internal = retry.api.__retry_internal

def _force_retry_params(new_tries=-1, new_delay=0, new_max_delay=None, new_backoff=1, new_jitter=0):
    def my_retry_internals(f, exceptions, tries, delay, max_delay, backoff, jitter, logger):
        # call the original __retry_internal with the new parameters
        return orig_retry_internal(f, exceptions, tries=new_tries, delay=new_delay, max_delay=new_max_delay,
                                   backoff=new_backoff, jitter=new_jitter, logger=logger)
    return my_retry_internals

class MyTestCase(unittest.TestCase):
    @patch("retry.api.__retry_internal", side_effect=_force_retry_params(new_tries=1))
    def test_something(self, m_retry):
        self.assertRaises(ReportNotReadyException, get_link, None)
IMHO you should use that hacking solution only if your back is against the wall and you have no chance to redesign your code to make it more testable. Internal functions/classes/methods can change without notice, and your test can become difficult to maintain in the future.

Override a "private" method in a python module

I want to test a function in Python, but it relies on a module-level "private" function that I don't want called, and I'm having trouble overriding/mocking it. Scenario:
module.py
def _cmd(command, args):
    # do something nasty
    pass

def function_to_be_tested():
    # do cool things
    _cmd('rm', '-rf /')
    return 1
test_module.py
import module

def test_function():
    assert module.function_to_be_tested() == 1
Ideally, in this test I don't want to call _cmd. I've looked at some other threads, and I've tried the following with no luck:
def test_function():
    def _cmd(command, args):
        # do nothing
        pass
    module._cmd = _cmd
although checking module._cmd against _cmd doesn't give the correct reference. Using mock:
from mock import patch

def _cmd_mock(command, args):
    # do nothing
    pass

@patch('module._cmd', _cmd_mock)
def test_function():
    ...
gives the correct reference when checking module._cmd, although function_to_be_tested still uses the original _cmd (as evidenced by it doing nasty things).
This is tricky because _cmd is a module-level function, and I don't want to move it into a module.
[Disclaimer]
The synthetic example posted in this question works, and the described issue comes from a specific implementation in production code. Maybe this question should be closed as off topic because the issue is not reproducible.
[Note] For impatient people: the solution is at the end of the answer.
Anyway, that question gave me a good point to think about: how can we patch a function reference when we cannot access the variable that holds the reference?
I have run into issues like this many times. There are lots of ways to end up in this situation, and the common ones are:
Decorators: the instance we would like to replace is passed as a decorator argument or used in the decorator's static implementation
What we would like to patch is a default argument of a method
In both cases, refactoring the code may be the best way to deal with it, but what if we are working with legacy code, or the decorator is a third-party decorator?
OK, our back is against the wall, but we are using Python, and in Python nothing is impossible. All we need is a reference to the function/method to patch, and instead of patching that reference we can patch its __code__: yes, I'm speaking about patching the bytecode instead of the function.
Take a real example. I'm using the default-parameter case because it is simple, but it works in the decorator case as well.
def cmd(a):
    print("ORIG {}".format(a))

def cmd_fake(a):
    print("NEW {}".format(a))

def do_work(a, c=cmd):
    c(a)

do_work("a")
cmd = cmd_fake
do_work("b")
Output:
ORIG a
ORIG b
OK, in this case we can test do_work by passing cmd_fake, but there are some cases where that is impossible; for instance, what if we need to call something like this:
def what_the_hell():
    list(map(lambda a: do_work(a), ["c", "d"]))
What we can do is patch cmd.__code__ instead of cmd itself:
cmd.__code__ = cmd_fake.__code__
So the following code
do_work("a")
what_the_hell()
cmd.__code__ = cmd_fake.__code__
do_work("b")
what_the_hell()
gives the following output:
ORIG a
ORIG c
ORIG d
NEW b
NEW c
NEW d
Moreover, if we want to use a mock, we can do it by adding the following lines:
from unittest.mock import Mock, call

cmd_mock = Mock()

def cmd_mocker(a):
    cmd_mock(a)

cmd.__code__ = cmd_mocker.__code__
what_the_hell()
cmd_mock.assert_has_calls([call("c"), call("d")])
print("WORKS")
That prints out:
WORKS
Maybe I'm done... but the OP is still waiting for a solution to their issue:
from mock import patch, Mock
import module

cmd_mock = Mock()

# A closure for grabbing the right function code
def cmd_mocker(a):
    cmd_mock(a)

@patch.object(module._cmd, '__code__', new=cmd_mocker.__code__)
def test_function():
    ...
Now, I should say: never use this trick unless your back is against the wall. Tests should be simple to understand and to debug; try to debug something like this and you will go mad!

Pythonic way of replacing real return values and implementation of functions with mock ones

My Flask + Python app pulls JSON from native binaries via a third-party module, nativelib.
Essentially, my functions look like this:
def list_users():
    return nativelib.all_users()
However, the strong dependency on this third-party module and its huge native component is proving to be a major impediment to rapid development.
What I would like to do is mock the return value of my list_users function.
Additionally, I should be able to switch back to 'real data' by simply toggling some boolean.
This boolean could be some attribute within code or some command line argument.
The solution I have currently devised looks something like this:
@mockable('users_list')
def list_users():
    return nativelib.all_users()
I have implemented the aforementioned mockable as a class:
import json
from functools import wraps

class mockable:
    mock = False
    # A dictionary that maps each api method
    # ..to the file which contains the corresponding mock json response
    __mock_json_resps = {'users_list': '/var/mock/list_user.json', 'user_by_id': '/var/mock/user_1.json'}

    def __init__(self, api_method):
        self.api_method = api_method

    def __call__(self, fn):
        @wraps(fn)
        def wrapper():
            if mockable.mock:
                # If mocking is enabled, read mock data from the json file
                mock_resp = self.__mock_json_resps[self.api_method]
                with open(mock_resp) as json_file:
                    return json.load(json_file)
            else:
                return fn()
        return wrapper
In my entry point module I can then enable mocking using
mockable.mock = True
Now although this might work, I am interested in knowing if this is the 'pythonic' way of doing it.
If not, what is the best way to achieve this?
Personally, I prefer using the mock library, despite it being an additional dependency. Also, mocking is used in my unittests only, so the actual code is not mixed with code for testing. So, in your place, I'd leave the function you have as it is:
def list_users():
    return nativelib.all_users()
And then in my unittest, I would patch the native library "temporarily":
def test_list_users(self):
    with mock.patch.object(nativelib,
                           'all_users',
                           return_value='json_data_here'):
        result = list_users()
        self.assertEqual(result, 'expected_result')
In addition, for writing multiple tests, I usually turn the mocking code into a decorator, like:
def mock_native_all_users(return_value):
    def _mock_native_all_users(func):
        def __mock_native_all_users(self, *args, **kwargs):
            with mock.patch.object(nativelib,
                                   'all_users',
                                   return_value=return_value):
                return func(self, *args, **kwargs)
        return __mock_native_all_users
    return _mock_native_all_users
and then use it:
@mock_native_all_users('json_data_to_return')
def test_list_users(self):
    result = list_users()
    self.assertEqual(result, 'expected_result')
You set mockable.mock in your entry code; that is, you don't dynamically switch it on and off at different times during a single run. So I'm inclined to think of this as a dependency configuration issue. This isn't necessarily better, just a slightly different way of framing the problem.
If you aren't bothered about inspection on nativelib.all_users then you don't need to touch list_users, just replace its dependency. For example as a quick one-shot if nativelib is only imported in one place:
if mock:
    class NativeLibMock:
        def all_users(self):
            return whatever  # maybe configure the mock with init params
    nativelib = NativeLibMock()
else:
    import nativelib
If you are bothered about inspection then there's a potential problem with the extra self parameter and so on. In practice you'd probably not want to duplicate my code above or have different instances of NativeLibMock kicking around in different modules that depend on nativelib. If you're defining a module anyway to contain your mock object then you might as well just implement a mock module.
So you can of course re-implement the module to return the mock data and either adjust PYTHONPATH (or sys.path) according to whether you want the mock, or do import mocklib as nativelib. Which you choose depends on whether you want to mock all uses of nativelib or just this one. Basically, the same thing you'd do during rapid development if the reason that you want to mock nativelib is because it hasn't been written yet ;-)
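A minimal sketch of the mock-module route, with an invented mocklib module and a simple import-time switch:
# mocklib.py -- development stand-in for nativelib
def all_users():
    # Return data shaped like the real nativelib output (illustrative values).
    return [{"id": 1, "name": "Mock User"}]

# In the application module, pick the implementation once at import time.
USE_MOCK = True  # could be driven by an environment variable or CLI flag

if USE_MOCK:
    import mocklib as nativelib
else:
    import nativelib

def list_users():
    return nativelib.all_users()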
Consider how important it is that the specification of 'users_list' as the tag for the mock data for list_users is located with list_users. If it's useful then decorating list_users is a benefit, but you still might want to just provide a different definition of the decorator in the mock vs non-mock cases. The non-mock "decorator" could just return fn instead of wrapping it.
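For instance, a sketch of that split (mock_enabled and MOCK_JSON_RESPS are invented names; the lookup table mirrors the one in the question):
import json

MOCK_JSON_RESPS = {'users_list': '/var/mock/list_user.json'}
mock_enabled = False  # set once by the entry point

if mock_enabled:
    def mockable(api_method):
        def decorator(fn):
            def wrapper(*args, **kwargs):
                # Serve the canned json instead of calling the real function.
                with open(MOCK_JSON_RESPS[api_method]) as json_file:
                    return json.load(json_file)
            return wrapper
        return decorator
else:
    def mockable(api_method):
        def decorator(fn):
            return fn  # non-mock case: hand back the original function untouched
        return decorator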
