I have a service class that connects to AWS S3. The connection is created with boto3 inside the __init__() method. I would like to mock it with a moto S3 instance I've defined in a fixture, but I just can't get the mock to take effect.
Let's say I have a service class that looks like this:
import boto3

class S3Storage:
    def __init__(self):
        self._s3 = boto3.resource('s3')

    def do_download(self):
        self._s3.download_file(
            Key='file.txt',
            Bucket='mybucket',
            Filename='path/to/destination/file.txt',
        )
and then I create a conftest file that has these moto fixtures:
# Fixtures
@pytest.fixture(scope='function')
def mocked_s3r():
    with mock_s3():
        yield boto3.resource('s3')

@pytest.fixture(scope='function')
def mocked_s3client():
    with mock_s3():
        yield boto3.client('s3')

@pytest.fixture(scope='function', autouse=True)
def upload_s3_resources(mocked_s3client, s3files):
    mocked_s3client.create_bucket(Bucket='mybucket')
    mocked_s3client.upload_file(
        Filename='path/to/destination/file.txt',
        Bucket='mybucket',
        Key='file.txt',
    )
The bottom fixture grabs a local file and places it in the moto S3 instance, which can then be accessed through the mocked_s3r resource mock.
My problem is that I cannot make a successful patch of the S3Storage._s3 attribute that holds the boto resource (I know I'm mixing boto clients and resources here, but I don't think that's what's causing the issue).
So I tried writing some fixtures that patch (using pytest-mock) or monkeypatch the boto resource and/or client.
# This is what I can't make work...
@pytest.fixture(autouse=True)
def mocked_s3(mocked_s3r, mocked_s3client, mocker):
    mocker.patch('app.utils.s3_storage.boto3.resource', return_value=mocked_s3r)
    return mocked_s3client

# This other approach also doesn't work...
@pytest.fixture(autouse=True)
def mocked_s3(mocked_s3client, mocker):
    mocker_s3storage = mocker.patch('app.utils.s3_storage.boto3.resource')
    mocker_s3storage()._s3 = mocked_s3client
    return mocked_s3client

# Nor this...
@pytest.fixture(autouse=True)
def mocked_s3(mocked_s3client, monkeypatch):
    monkeypatch.setattr('app.utils.s3_storage.S3Storage._s3', mocked_s3client)
    return mocked_s3client
But nothing works. I think I might be fundamentally misunderstanding how to patch an attribute that belongs to an instance of a class.
I'd rather do all this in a fixture, not in each individual test, such that I can write a test like:
def test_download_file(mocked_s3client):
    s3storage = S3Storage()
    s3storage._s3  # This should be a mock object, but it just connects to the real AWS
    s3storage.do_download()
and I don't have to specify the mock each time.
It is not necessary to patch the client in order for Moto to work. As long as clients/resources are created while the mock is active, they are automatically patched.
Using your example fixtures, the following test works:
def test_file_exists(upload_s3_resources):
    S3Storage().do_download()
    # TODO: actually verify something happened
Note that the download_file call in your logic needs to be slightly modified, but I'm assuming that was just an example to keep things simple. I had to change it to the following to get the test to succeed:
self._s3.Bucket('mybucket').download_file('file.txt', 'test.txt')
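To make the ordering explicit: the autouse upload_s3_resources fixture depends on mocked_s3client, which enters mock_s3() before the test body runs, so anything constructed inside the test is automatically mocked. A minimal test-file sketch under that assumption (the module path app.utils.s3_storage is taken from your question, and 'test.txt' matches the modified download call above):

import os

from app.utils.s3_storage import S3Storage

def test_download_file(upload_s3_resources):
    # S3Storage() is constructed here, while mock_s3() is still active,
    # so the boto3 resource created in __init__ talks to moto, not AWS.
    storage = S3Storage()
    storage.do_download()
    # Verify something actually happened: the file landed on disk.
    assert os.path.exists('test.txt')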
I have a function in my code that FastAPI uses to provide a DB session to the endpoints:
def get_db() -> Generator[Session, None, None]:
    try:
        db = SessionLocal()
        yield db
    finally:
        db.close()
I want to use the same function as a pytest fixture. If I do something like the following, the fixture is not being recognized:
pytest.fixture(get_db, name="db", scope="session")

def test_item_create(db: Session) -> None:
    ...
test_item_create throws an error about db not being a fixture: fixture 'db' not found.
So I can rewrite get_db in my conftest.py and wrap it with pytest.fixture to get things working, but I was wondering if there's a better way of reusing existing functions as fixtures. If I have more helper functions like get_db, it would be nice not to have to rewrite them for tests.
I think pytest cannot find the fixture as things are written in your example. Maybe you are trying to get to something like this?
db = pytest.fixture(get_db, name="db", scope="session")

def test_item_create(db: Session) -> None:
    ...
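The returned object is the fixture itself, so it needs to be bound to a module-level name that pytest can collect, typically in conftest.py. A minimal sketch, assuming get_db lives in a hypothetical app.db module:

# conftest.py
import pytest

from app.db import get_db  # hypothetical module path

# pytest.fixture() returns the decorated function; binding it to a
# module-level name lets pytest's fixture collection discover it.
db = pytest.fixture(get_db, name="db", scope="session")

Because get_db is a generator (yield plus a finally block), pytest treats it as a yield fixture, so db.close() still runs at teardown.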
The recommended way to use httpx.Client() is as a context manager, which ensures the connections get properly cleaned up upon exiting the with block.
But let us suppose I want to write a class that instantiates an httpx.Client() session that can be reused throughout my code without having to put the entire script inside a with block.
class APIWrapper:
    def __init__(self):
        self.session = httpx.Client()
        self.token = self.fetch_oauth_token()

    def fetch_oauth_token(self, **kwargs):
        r = self.session.get(endpoint)
        # Perform an authorization_code flow.
        return token

    def get(self, endpoint):
        r = self.session.get(endpoint, headers=self.headers)
        return r

    def __exit__(self, exc_type, exc_value, traceback):
        self.session.close()
api = APIWrapper()
api.get('https://api.some.url/statistics?location=worldwide')
# <1000 lines of code>
api.get('https://api.some.url/users?location=denver')
In the illustrative example above, I'm hoping to use one session for the API's OAuth authentication flow and then re-use it for any API calls the user makes.
Is this a legitimate way to go about things, or is it not a great idea? Would it be better to use separate sessions and force the user to use a with context manager for their own calls?
While searching I have seen that a session should be closed in the class's __exit__ method. Is using __exit__ sufficient to ensure proper clean-up (even if exceptions occur)? Is it equivalent to the with-block way of doing it?
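One thing worth noting: __exit__ is only ever called by a with statement, so defining it alone does not guarantee clean-up; the object also needs __enter__ to be usable as a context manager. A minimal sketch of both styles, based on the APIWrapper above:

import httpx

class APIWrapper:
    def __init__(self):
        self.session = httpx.Client()

    def close(self):
        self.session.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Invoked by the with statement, even when the block raises.
        self.close()

# Context-manager style: clean-up is automatic, including on exceptions.
with APIWrapper() as api:
    ...

# Long-lived style: the caller is responsible for clean-up.
api = APIWrapper()
try:
    ...
finally:
    api.close()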
I have written a Flask application that uses Flask-Dance for user authentication. Now I want to test a few views that I have protected with @login_required.
I wanted to follow the Flask-Dance testing docs but could not get it to work, because I am only using unittest and not pytest. I also use GitHub and not Google as in the docs, so is sess['github_oauth_token'] correct? A prototype sample test could look like the following:
def test_sample(self):
    with self.client as client:
        with client.session_transaction() as sess:
            sess['github_oauth_token'] = {
                'access_token': 'fake access token',
                'id_token': 'fake id token',
                'token_type': 'Bearer',
                'expires_in': '3600',
                'expires_at': self.time + 3600,
            }
        response = client.post(url_for('core.get_sample'), data=self.fake_sample)
        self.assertRedirects(response, url_for('core.get_sample'))
The assertRedirects call fails because I am redirected to the login page http://localhost/login/github?next=%2Fsample%2F and not to url_for('core.get_sample').
Then I tried to simply disable authentication by following the official Flask-Login docs:
It can be convenient to globally turn off authentication when unit testing. To enable this, if the application configuration variable LOGIN_DISABLED is set to True, this decorator will be ignored.
But this does not work either; the test still fails because login_required is somehow executed.
So my questions are:
Because I am using GitHub and not Google as in the docs, is github_oauth_token the correct key for the session?
How do I test views that have the @login_required decorator with unittest when using Flask-Dance?
Edit: LOGIN_DISABLED=True works as long as I define it in the config class I use for app.config.from_object(config['testing']); what did not work was setting self.app.config['LOGIN_DISABLED'] = True in my setUp method.
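For reference, a minimal sketch of that working arrangement; the class and dictionary names are hypothetical, following the common Flask config pattern:

# config.py -- hypothetical names
class TestingConfig:
    TESTING = True
    LOGIN_DISABLED = True  # @login_required becomes a no-op

config = {'testing': TestingConfig}

The point from the edit above is that the flag has to be in the configuration that app.config.from_object() loads, not set afterwards in setUp.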
Even if you're using the unittest framework for testing instead of pytest, you can still use the mock storage classes documented in the Flask-Dance testing documentation. You'll just need some other mechanism to replace the real storage with the mock, instead of the monkeypatch fixture from pytest. You can easily use the unittest.mock package instead, like this:
import unittest
from unittest.mock import patch

from flask import url_for
from flask_dance.consumer.storage import MemoryStorage

from my_app import create_app

class TestApp(unittest.TestCase):
    def setUp(self):
        self.app = create_app()
        self.client = self.app.test_client()

    def test_sample(self):
        github_bp = self.app.blueprints["github"]
        storage = MemoryStorage({"access_token": "fake-token"})
        with patch.object(github_bp, "storage", storage):
            with self.client as client:
                response = client.post(url_for('core.get_sample'), data=self.fake_sample)
                self.assertRedirects(response, url_for('core.get_sample'))
This example uses the application factory pattern, but you could also import your app object from somewhere else and use it that way, if you want.
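For the non-factory case, a minimal sketch, assuming my_app exposes a module-level app object:

# Without the factory pattern -- my_app.app is an assumed name
from my_app import app

class TestApp(unittest.TestCase):
    def setUp(self):
        self.app = app
        self.client = self.app.test_client()
    # test_sample stays the same as above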
I have a class that goes out to an external server and retrieves data that I then process. I really need to unit test this whole thing, but I do not want to keep hitting the external server every time I run the tests. My question: what is the proper protocol for this? I'm using lettuce, but I'm also open to other ideas.
Here is my class:
# In my class
class SomeClass:
    def doHttpGet(self):
        ...  # return the response from http://somewhere.com

What should happen in the test is that I override doHttpGet:
class SomeClass:
    def doHttpGet(self):
        return {"some": "data", "which": "mocks"}
Use mock objects for testing networking.
Wrap your network API with a class, e.g. NetworkManager. Its makeRequest() method will make the real HTTP request.
class NetworkManager:
    def makeRequest(self):
        ...  # do an HTTP request to the server and return the response
Unit tests must use a mock subclass of NetworkManager, which overrides the makeRequest() method and returns the needed testing data without touching the network:
class MockNetworkManager(NetworkManager):
    def makeRequest(self):
        return testingResponse
Your SomeClass must make its HTTP requests via the network manager. In a production environment, it will use an instance of NetworkManager and perform real network requests; in a testing environment, SomeClass will use an instance of MockNetworkManager.
class SomeClass:
    def __init__(self, network):
        self.network = network

    def doHttpGet(self):
        response = self.network.makeRequest()
        return response
In production code:
some = SomeClass(NetworkManager())
some.doHttpGet()
In unit tests:
some = SomeClass(MockNetworkManager())
some.doHttpGet()
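Wired into a runnable unittest sketch (reusing the classes above, with the question's example payload as the canned response):

import unittest

class MockNetworkManager(NetworkManager):
    def makeRequest(self):
        # Canned response instead of a real HTTP call.
        return {"some": "data", "which": "mocks"}

class SomeClassTest(unittest.TestCase):
    def test_doHttpGet_without_network(self):
        some = SomeClass(MockNetworkManager())
        # No network traffic happens here; the mock supplies the data.
        self.assertEqual(some.doHttpGet(), {"some": "data", "which": "mocks"})

if __name__ == "__main__":
    unittest.main()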
Using tornado, I want to create a bit of middleware magic that ensures that my SQLAlchemy sessions get properly closed/cleaned up so that objects aren't shared from one request to the next. The trick is that, since some of my tornado handlers are asynchronous, I can't just share one session for each request.
So I am left trying to create a ScopedSession that knows how to create a new session for each request. All I need to do is define a scopefunc for my code that can turn the currently executing request into a unique key of some sort; however, I can't seem to figure out how to get the current request at any given point in time (outside of the scope of the current RequestHandler, which my function doesn't have access to either).
Is there something I can do to make this work?
You might want to associate the Session with the request itself (i.e. don't use scoped_session if it's not convenient). Then you can just say request.session. You still need hooks at the start/end of the request for setup/teardown.
Edit: here is a custom scoping function:

def get_current_tornado_request():
    # TODO: ask on the Tornado mailing list how
    # to acquire the request currently being invoked
    ...

Session = scoped_session(sessionmaker(), scopefunc=get_current_tornado_request)
(This is a 2017 answer to a 2011 question.) As @Stefano Borini pointed out, the easiest way in Tornado 4 is to just let the RequestHandler implicitly pass the session around. Tornado will track the handler instance state when using the coroutine decorator pattern:
import logging

from sqlalchemy import create_engine, exc as sqla_exc
from sqlalchemy.orm import sessionmaker, exc as orm_exc
from tornado import gen
from tornado.web import RequestHandler

from my_models import SQLA_Class

_logger = logging.getLogger(__name__)

Session = sessionmaker(bind=create_engine(...))

class BaseHandler(RequestHandler):
    @gen.coroutine
    def prepare(self):
        self.db_session = Session()

    def on_finish(self):
        self.db_session.close()

class MyHandler(BaseHandler):
    @gen.coroutine
    def post(self):
        SQLA_Object = self.db_session.query(SQLA_Class)...
        SQLA_Object.attribute = ...
        try:
            self.db_session.commit()
        except sqla_exc.SQLAlchemyError:
            _logger.exception("Couldn't commit")
            self.db_session.rollback()
If you really, really need to asynchronously reference a SQLAlchemy session inside a declarative_base (which I would consider an anti-pattern, since it over-couples the model to the application), Amit Matani has a non-working example here.