How to mock flask.request with unittest? - python

In my application I have a class that is called from a flask service. This class takes some attributes from the flask.request object, so I want to mock them.
An example of the implementation that I have is:
myClassHelper.py

from flask import request

class MyClassHelper:
    def __init__(self, addRequestData=False):
        self.attribute = 'something'
        self.path = request.path if addRequestData else None

    def __str__(self):
        return 'attribute={0}; path={1};'.format(self.attribute, self.path)
myClassHelperTest.py

from unittest import TestCase
from unittest.mock import MagicMock
import flask
from myClassHelper import MyClassHelper

class MyClassHelperTest(TestCase):
    def setUp(self):
        self.path = '/path'
        self.unmock = {}
        self.unmock['flask.request'] = flask.request
        flask.request = MagicMock(path='/path')

    def tearDown(self):
        flask.request = self.unmock['flask.request']

    def test_printAttributes(self):
        expectedResult = 'attribute=something; path={0};'.format(self.path)
        result = str(MyClassHelper(addRequestData=True))
        self.assertEqual(expectedResult, result)
The problem comes with the import from myClassHelper import MyClassHelper, which triggers the from flask import request inside myClassHelper. Because that import binds the name at module load time, the mock assigned in the test's setUp method is never seen by MyClassHelper.
This can be solved by importing the whole flask module and accessing the attribute as flask.request.path, but I would like to avoid importing the full flask module.
Is there any way to create a unit test for a method that uses attributes from flask.request, mocking them and without using the flask test client?

There must be a way, but unit testing code like this is going to cause you trouble anyway. The SUT accesses global state that is managed by another module, so your tests need to set up that global state properly. This can be done either by using that other module as is, which you don't want for good reasons (plus it wouldn't be unit testing anymore), or by monkey-patching it. The latter is often tricky (as you already found out) and brittle: your tests will break if you change the way you import things in the production code, and why should that happen if the relevant behavior has not changed?
The fix for this kind of problem is making your objects ask for the things they need instead of looking them up in global state. So if all an instance of MyClassHelper needs is a path, just make it ask for a path, and let the calling code figure out where the path should come from. In particular, your tests can easily provide canned paths.
This is how your test would look if you follow this principle:
class MyClassHelperTest(TestCase):
    def test_printAttributes(self):
        expectedResult = 'attribute=something; path=/path;'
        result = str(MyClassHelper('/path'))
        self.assertEqual(expectedResult, result)
Much simpler than before. And this is how you make it pass:
class MyClassHelper:
    def __init__(self, path):
        self.attribute = 'something'
        self.path = path

    def __str__(self):
        return 'attribute={0}; path={1};'.format(self.attribute, self.path)
You do not really need attribute if the behavior in the test is all you want; I left it in to deviate less from your original code. I assume you have other tests that show why it is actually needed.
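For completeness, the literal question can also be answered: because myClassHelper does from flask import request, the name to patch is myClassHelper.request (the name in the using module), not flask.request. Here is a stdlib-only sketch of that gotcha; flaskish and helper are hypothetical in-memory modules standing in for flask and myClassHelper so the example runs without Flask installed:

```python
import sys
import types
from unittest.mock import MagicMock, patch

# "flaskish" stands in for flask, "helper" for myClassHelper.
flaskish = types.ModuleType("flaskish")
flaskish.request = MagicMock(path=None)
sys.modules["flaskish"] = flaskish

helper = types.ModuleType("helper")
sys.modules["helper"] = helper
exec(
    "from flaskish import request\n"   # binds helper.request at import time
    "def current_path():\n"
    "    return request.path\n",
    helper.__dict__,
)

# Rebinding flaskish.request (what the setUp above does) has no effect:
# helper.request still points at the originally imported object.
flaskish.request = MagicMock(path="/new")
print(helper.current_path())           # None, not '/new'

# Patching the name in the *using* module is what works:
with patch("helper.request", MagicMock(path="/path")):
    print(helper.current_path())       # /path
```

In the real code this corresponds to mock.patch('myClassHelper.request', MagicMock(path='/path')); the dependency-injection refactoring above avoids the whole issue.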

Related

How do I mock an external library's classes/functions such as yaml.load() or Box() in Python

How would I go about testing the following class and its functions?
import yaml
from box import Box
from yaml import SafeLoader

class Config:
    def set_config_path(self):
        self.path = r"./config/datasets.yaml"
        return self.path

    def create_config(self):
        with open(r"./config/datasets.yaml") as f:
            self.config = Box(yaml.load(f, Loader=SafeLoader))
        return self.config
These are the tests I have created so far, but I am struggling with the final function:
import unittest
from unittest.mock import mock_open, patch
from src.utils.config import Config

class TestConfig(unittest.TestCase):
    def setUp(self):
        self.path = r"./config/datasets.yaml"

    def test_set_config_path(self):
        assert Config.set_config_path(self) == self.path

    @patch("builtins.open", new_callable=mock_open, read_data="data")
    def test_create_config(self, mock_file):
        assert open(self.path).read() == "data"
How would I go about testing/mocking the Box() and yaml.load() calls?
I have tried mocking Box and yaml.load where they are used in the code, but I don't fully understand how this works.
Ideally I'd want to be able to pass a fake file to the with open() as f:, which is then read by yaml.load and Box to produce a fake dictionary config.
Thanks!
The thing to remember about unit tests is that the goal is to test the public interfaces of YOUR code. Mocking a third party's code is not really a good thing to do; it can be done in Python, but it involves a lot of monkey-patching and other machinery.
Creating and deleting files in a unit test is fine. So you could just create a test version of the YAML file and store it in the unit test directory. During a test, load the file and then make assertions to check that it was loaded properly and returned.
You also wouldn't write a unit test checking that Box itself initializes properly: it is third-party code, and verifying its behavior belongs in its own test suite, not yours.
So: create a test file, open it, load it as YAML, then pass the result into the Box constructor, and make assertions that those steps completed properly. There is no need to mock yaml or Box.
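That said, if you do want to see the mechanics of feeding a fake file to with open() as f:, here is a minimal stdlib-only sketch. read_config is a hypothetical stand-in for create_config; in a real test you would additionally patch yaml.load and Box where they are used (i.e. src.utils.config.yaml.load and src.utils.config.Box), or follow the advice above and use a real test file:

```python
from unittest.mock import mock_open, patch

FAKE_YAML = "datasets:\n  train: data/train.csv\n"

def read_config(path):
    # Hypothetical stand-in for Config.create_config: it opens the file and
    # returns the raw text; the real method would feed this to yaml.load/Box.
    with open(path) as f:
        return f.read()

# mock_open intercepts builtins.open, so no file needs to exist on disk.
with patch("builtins.open", new_callable=mock_open, read_data=FAKE_YAML):
    content = read_config("./config/datasets.yaml")

print(content == FAKE_YAML)  # True
```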

Mocking Classes in Python with unittest

I'm trying to make a mock of a class with unittest.mock.
I have the following structure of packages and modules:
services/
    service_one.py
repository/
    repository_mongodb.py
test/
    services/
        test_service_one.py
The module repository_mongodb.py is used inside service_one.py (by importing the class).
This is the code for the files.
File repository_mongodb.py:

class RepositoryMongoDB:
    def __init__(self):
        self.library = []

    def save(self, thing):
        self.library.append(thing)
        return True

File service_one.py:

from repository.repository_mongodb import RepositoryMongoDB

class ServiceOne:
    def __init__(self):
        self.repository = RepositoryMongoDB()

    def save_service(self, thing):
        # Do validation or something else
        return self.repository.save(thing)
Now I want to mock the repository used by ServiceOne, and here's what I do.
import unittest
from unittest.mock import patch
from service.service_one import ServiceOne

class ServiceOneTest(unittest.TestCase):
    def setUp(self):
        self.service_one = ServiceOne()

    @patch('service.service_one.RepositoryMongoDB')
    def test_get_one_book_if_exists_decorator(self, mock_repo):
        mock_repo.save.return_value = "call mock"
        result = self.service_one.save_service("")
        self.assertEqual("call mock", result)
I want the call to the save method of RepositoryMongoDB to return the result I assigned to it, but this doesn't happen.
I've also tried to do it this way.
@patch('repository.repository_mongodb.RepositoryMongoDB')
def test_get_one_book_if_exists_decorator(self, mock_repo):
    mock_repo.save.return_value = "call mock"
    result = self.service_one.save_service("")
    self.assertEqual("call mock", result)
But it doesn't work either.
But if I try to mock the function save() this way:
@patch('service.service_one.RepositoryMongoDB.save')
def test_get_one_book_if_exists_decorator_2(self, mock_repo):
    mock_repo.return_value = "call mock"
    result = self.service_one.save_service("")
    self.assertEqual("call mock", result)
This works correctly! My understanding is that when save() is looked up in the service_one module, it gets replaced by the mock.
What would be the right way to do it? (and the best way)
I'm a beginner in the world of Python. I have searched and read many posts, but all the examples shown are very simple (like a sum() method). I've done this in other languages but never in Python.
If you insist on using patch, your final attempt is the right way.
The reason the previous attempts don't work is that the patch is applied after the import, so you're not patching the object you think you are.
I've been working with patch for a long time and this still bites me occasionally, so I recommend using simple constructor-based dependency injection instead.
In service_one:

class ServiceOne:
    def __init__(self, repository):
        self.repository = repository
Initialize this with service_one = ServiceOne(RepositoryMongoDB()), perhaps in the __init__ file or similar. Then in your test you can create the mock in setUp.
from unittest.mock import MagicMock

class ServiceOneTest(unittest.TestCase):
    def setUp(self):
        self.repository = MagicMock()
        self.service_one = ServiceOne(self.repository)
N.B. patching vs. dependency injection:
Patching also couples the tests to the import structure of your program, which makes safe refactorings that alter the structure of your modules strictly more difficult. It's best reserved for legacy code, when you need to get some tests in place before making changes.
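Putting the pieces together, here is a self-contained sketch of the dependency-injected version; the classes are trimmed to the essentials of the question's code:

```python
import unittest
from unittest.mock import MagicMock

class ServiceOne:
    def __init__(self, repository):      # the repository is injected
        self.repository = repository

    def save_service(self, thing):
        # Do validation or something else, then delegate.
        return self.repository.save(thing)

class ServiceOneTest(unittest.TestCase):
    def setUp(self):
        self.repository = MagicMock()
        self.service_one = ServiceOne(self.repository)

    def test_save_service_delegates_to_repository(self):
        self.repository.save.return_value = "call mock"
        result = self.service_one.save_service("a thing")
        self.assertEqual("call mock", result)
        self.repository.save.assert_called_once_with("a thing")
```

No patching and no import paths in strings: the test controls the collaborator directly, so refactoring the module layout cannot break it.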

Is it possible to use a fixture in another fixture and both in a test?

I'm mocking out an API using unittest.mock. My interface is a class that uses requests behind the scenes, so I'm doing something like this:
@pytest.fixture
def mocked_api_and_requests():
    with mock.patch('my.thing.requests') as mock_requests:
        mock_requests.post.return_value = good_credentials
        api = MyApi(some='values', that='are defaults')
        yield api, mock_requests

def test_my_thing_one(mocked_api_and_requests):
    api, mocked_requests = mocked_api_and_requests
    ...  # some assertion or another

def test_my_thing_two(mocked_api_and_requests):
    api, mocked_requests = mocked_api_and_requests
    ...  # some other assertions that are different
As you can probably see, I've got the same first line in both of those tests and that smells like it's not quite DRY enough for me.
I'd love to be able to do something like:
def test_my_thing_one(mock_requests, logged_in_api):
    mock_requests.get.return_value = ...
Rather than have to unpack those values, but I'm not sure if there's a way to reliably do that using pytest. If it's in the documentation for fixtures I've totally missed it. But it does feel like there should be a right way to do what I want to do here.
Any ideas? I'm open to using class TestGivenLoggedInApiAndMockRequests: ... if I need to go that route. I'm just not quite sure what the appropriate pattern is here.
It is possible to achieve exactly the result you want by using multiple fixtures.
Note: I modified your example minimally so that the code in my answer is self-contained, but you should be able to adapt it to your use case easily.
In myapi.py:

import requests

class MyApi:
    def get_uuid(self):
        return requests.get('http://httpbin.org/uuid').json()['uuid']
In test.py:

from unittest import mock
import pytest
from myapi import MyApi

FAKE_RESPONSE_PAYLOAD = {
    'uuid': '12e77ecf-8ce7-4076-84d2-508a51b1332a',
}

@pytest.fixture
def mocked_requests():
    with mock.patch('myapi.requests') as _mocked_requests:
        response_mock = mock.Mock()
        response_mock.json.return_value = FAKE_RESPONSE_PAYLOAD
        _mocked_requests.get.return_value = response_mock
        yield _mocked_requests

@pytest.fixture
def api():
    return MyApi()

def test_requests_was_called(mocked_requests, api):
    assert not mocked_requests.get.called
    api.get_uuid()
    assert mocked_requests.get.called

def test_uuid_is_returned(mocked_requests, api):
    uuid = api.get_uuid()
    assert uuid == FAKE_RESPONSE_PAYLOAD['uuid']

def test_actual_api_call(api):  # Notice we don't mock anything here!
    uuid = api.get_uuid()
    assert uuid != FAKE_RESPONSE_PAYLOAD['uuid']
Instead of defining one fixture that returns a tuple, I defined two fixtures which the tests can use independently. An advantage of composing fixtures like this is exactly that independence: the last test actually calls the API, simply by virtue of not using the mocked_requests fixture.
Note that -- to answer the question title directly -- you could also make mocked_requests a prerequisite of the api fixture by simply adding it to the parameters, like so:
@pytest.fixture
def api(mocked_requests):
    return MyApi()
If you run the test suite, you will see that the dependency takes effect, because test_actual_api_call will no longer pass.
If you make this change, using the api fixture in a test will also mean executing it in the context of mocked_requests, even if you don't directly specify the latter in your test function's arguments. It's still possible to use it explicitly, e.g. if you want to make assertions on the returned mock.
If you cannot easily split your tuple fixture into two independent fixtures, you can "unpack" a tuple or list fixture into other fixtures using my pytest-cases plugin, as explained in this answer.
Your code would look like:
from pytest_cases import pytest_fixture_plus

@pytest_fixture_plus(unpack_into="api,mocked_requests")
def mocked_api_and_requests():
    with mock.patch('my.thing.requests') as mock_requests:
        mock_requests.post.return_value = good_credentials
        api = MyApi(some='values', that='are defaults')
        yield api, mock_requests

def test_my_thing_one(api, mocked_requests):
    ...  # some assertion or another

def test_my_thing_two(api, mocked_requests):
    ...  # some other assertions that are different

Python: mocks in unittests

I have a situation similar to this:
class BaseClient(object):
    def __init__(self, api_key):
        self.api_key = api_key
        # Doing some stuff.

class ConcreteClient(BaseClient):
    def get_some_basic_data(self):
        # Doing something.

    def calculate(self):
        # some stuff here
        self.get_some_basic_data(param)
        # some calculations
Then I want to test the calculate function, mocking out the get_some_basic_data function. I'm doing something like this:
import unittest
from unittest.mock import Mock, patch
from my_module import ConcreteClient

def my_fake_data(param):
    return [{"key1": "val1"}, {"key2": "val2"}]

class ConcreteClientTest(unittest.TestCase):
    def setUp(self):
        self.client = Mock(ConcreteClient)

    def test_calculate(self):
        patch.object(ConcreteClient, 'get_some_basic_data',
                     return_value=my_fake_data).start()
        result = self.client.calculate(42)
But it doesn't work as I expect. I thought self.get_some_basic_data(param) would return the list from my_fake_data, but it still looks like a Mock object, which is not what I expected.
What is wrong here?
There are two main problems that you are facing here. The primary one, which causes what you are currently experiencing, is how you are actually mocking. Since you are patching on ConcreteClient, you want to make sure you are still using the real ConcreteClient and mocking only the attributes you want mocked during the test. You can see this illustrated in the documentation; unfortunately there is no explicit anchor for the exact line, but follow this link:
https://docs.python.org/3/library/unittest.mock-examples.html
and find the section that states:
"Where you use patch() to create a mock for you, you can get a reference to the mock using the 'as' form of the with statement:"
The code in reference is:
class ProductionClass:
    def method(self):
        pass

with patch.object(ProductionClass, 'method') as mock_method:
    mock_method.return_value = None
    real = ProductionClass()
    real.method(1, 2, 3)
    mock_method.assert_called_with(1, 2, 3)
The critical thing to notice here is how everything is called: a real instance of the class is created. In your example, when you do this:
self.client = Mock(ConcreteClient)
You are creating a Mock object that is specced on ConcreteClient. So ultimately this is just a Mock object that holds the attributes of your ConcreteClient; you are not actually holding a real instance of ConcreteClient.
To solve this problem, simply create a real instance after you patch the object. Also, to spare yourself manually starting and stopping patch.object, use it as a context manager; it will save you a lot of hassle.
Finally, your second problem is the return_value: it is currently returning the uncalled my_fake_data function. You want the data itself, so the return_value needs to be what that function returns; you could simply put the data itself as your return_value.
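The difference is easy to see in isolation with a quick sketch:

```python
from unittest.mock import Mock

def my_fake_data(param=None):
    return [{"key1": "val1"}, {"key2": "val2"}]

wrong = Mock(return_value=my_fake_data)    # hands callers the function itself
right = Mock(return_value=my_fake_data())  # hands callers the data

print(callable(wrong()))  # True -- the caller gets something it must call again
print(right())            # [{'key1': 'val1'}, {'key2': 'val2'}]
```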
With these two corrections in mind, your test should now just look like this:
class ConcreteClientTest(unittest.TestCase):
    def test_calculate(self):
        with patch.object(ConcreteClient, 'get_some_basic_data',
                          return_value=[{"key1": "val1"}, {"key2": "val2"}]):
            concrete_client = ConcreteClient(Mock())
            result = concrete_client.calculate()

        self.assertEqual(
            result,
            [{"key1": "val1"}, {"key2": "val2"}]
        )
I took the liberty of having calculate return the result of get_some_basic_data just to have something to compare against; I'm not sure what your real code looks like. But ultimately, the structure illustrated above is how your test should be doing this.
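For reference, here is the corrected test in a self-contained form that runs on its own. The two classes are reduced to hypothetical minimal versions of the question's code, with calculate() simply returning the fetched data:

```python
import unittest
from unittest.mock import Mock, patch

class BaseClient:
    def __init__(self, api_key):
        self.api_key = api_key

class ConcreteClient(BaseClient):
    def get_some_basic_data(self):
        raise RuntimeError("would hit the real backend")  # never runs when patched

    def calculate(self):
        return self.get_some_basic_data()

class ConcreteClientTest(unittest.TestCase):
    def test_calculate(self):
        fake_data = [{"key1": "val1"}, {"key2": "val2"}]
        # Patch the method on the class, then use a *real* instance.
        with patch.object(ConcreteClient, 'get_some_basic_data',
                          return_value=fake_data):
            client = ConcreteClient(Mock())
            self.assertEqual(client.calculate(), fake_data)
```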

Access parent object from submodule

I'm writing a module for accessing some data from a local SQLite Database, and would like the end-user functionality to work like this:
import pyMyDB
myDBobj = pyMyDB.MyDB('/path/to/DB/file')  # Instance of the class MyDB
User has simply created an object that connects to a database.
Then, depending on the need, I want to have different submodules the user can access to work on the data from the database. For example, Driving provides certain functions while Walking provides others, which I'd love to be accessed like so:
result = myDBobj.Driving.GetTachometerReading()
or
result = myDBobj.Walking.GetShoeType()
My question is about how to make the submodules Driving & Walking have access to the parent Object (not just the Parent module's class/functions, but the instantiated object).
If I make my file hierarchy something like this:

pyMyDB/
    __init__.py      # imports the below modules as so:
    MyDB_class.py    # imported as *, contains MyDB class definition
    Driving.py       # imported normally via `import Driving`
    Walking.py       # imported normally
where __init__.py imports MyDB_class.py (which contains the Database class) and also imports Driving/Walking.py, then
the user can do pyMyDB.Driving.some_func(), but some_func() won't actually have access to an instantiated MyDB object, right?
Has anyone come across a way to have the sub-modules access the instantiated object?
Here is my current solution; it seems overly complicated.
First, I have to use an additional globals.py file to overcome circular module imports (like this hint).
In the Driving module I make a new class called DrvClass, which is used to make a duplicate of the MyDBobj but also has the extra methods I want, such as GetTachometerReading().
Driving.py:

class DrvClass(MyDB):
    def __init__(self, MyDBobj):
        self.attribute1 = MyDBobj.attr1
        self.attribute2 = MyDBobj.attr2
        ....  # copy all attribute values!

    def GetTachometerReading(self):
        ....  # some function here
Then, to accomplish the sub-indexing (MyDBobj.Driving.GetTach()), from within the Driving.py module, I add a Driving() method to the MyDB class via
def temp_driving(self):
    return DrvClass(self)  # pass the MyDBobj

MyDB.Driving = temp_driving  # add the above method to the MyDB class
So now a user can do MyDBobj.Driving().GetTachometerReading(), where MyDBobj.Driving() returns the new DrvClass object that has the new GetTachometerReading() function. I don't like the fact that I must call Driving() as a function.
What do you think - is there a better/simpler way?
Btw the reason I want the MyDB class to be separate is because our access methods may change, without changing the analysis functions (Driving/Walking), or vice-versa. Thus I don't want to just add the MyDB access techniques directly to separate Driving & Walking modules.
Thanks in advance for your sage advice!
I think I might use a different access approach on the client side. If the clients used a protocol like this:
result = myDBobj.Driving_GetTachometerReading()
then it's straightforward to add this name into the class in the sub-module Driving.py:
import pyMyDB

def GetTachometerReading(self):
    # self will be the MyDB instance
    ...

pyMyDB.MyDB.Driving_GetTachometerReading = GetTachometerReading
However, if you are set on your approach, then I think it could be improved. As it is, you create a new instance of DrvClass for every call to a Driving function, which is not good. We do need to create an instance when MyDB itself is instantiated, yet that seems tricky because at that time we don't seem to know which sub-modules to include. I have fixed that by having each sub-module inject a list of its methods into MyDB, like this:
pyMyDB:

import functools

class MyDB(object):
    submodule_methods = {}

    def __init__(self):
        for k, v in self.submodule_methods.items():
            self.__dict__[k] = SubmodulePointer(self, v)
        self.db_stuff = "test"

class SubmodulePointer(object):
    def __init__(self, db, methods):
        self.db = db
        for name, func in methods.items():
            self.__dict__[name] = functools.partial(func, db)
Then for each of the sub-modules, e.g. Driving, we do:

import pyMyDB

def GetTachometerReading(self):
    # self here is the db instance, as if this were a method on the db
    print(self.db_stuff)

pyMyDB.MyDB.submodule_methods["Driving"] = {"GetTachometerReading": GetTachometerReading}
Then the client can just do:

import pyMyDB
import Driving

m = pyMyDB.MyDB()
m.Driving.GetTachometerReading()
As you wanted. A simpler approach, with a slightly more conventional style would be to make Driving a class of its own:
pyMyDB:

class MyDB(object):
    submodule_classes = []

    def __init__(self):
        for c in self.submodule_classes:
            self.__dict__[c.__name__] = c(self)
        self.db_stuff = "test"
Then for each of the sub-modules, e.g. Driving, we do:

import pyMyDB

class Driving(object):
    def __init__(self, db):
        self.db = db

    def GetTachometerReading(self):
        # self here is the Driving instance,
        # but we have access to the db
        print(self.db.db_stuff)

pyMyDB.MyDB.submodule_classes.append(Driving)
This means that the methods in the Driving class don't look like methods on MyDB, but that could be an advantage. I'm not sure if that was something you wanted.
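To see the class-based variant run end to end, here is the same idea collapsed into one self-contained sketch; the pieces map onto pyMyDB and Driving.py as above:

```python
class MyDB:
    submodule_classes = []          # populated by each sub-module at import time

    def __init__(self):
        for c in self.submodule_classes:
            setattr(self, c.__name__, c(self))   # e.g. self.Driving = Driving(self)
        self.db_stuff = "test"

class Driving:
    def __init__(self, db):
        self.db = db                # keep a reference to the parent object

    def GetTachometerReading(self):
        return self.db.db_stuff     # submodule methods read state off the parent

# In the real layout this line lives at the bottom of Driving.py:
MyDB.submodule_classes.append(Driving)

m = MyDB()
print(m.Driving.GetTachometerReading())  # test
```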
