Python built-in fixtures

I'm trying to run a pytest test which uses the following function:
def storage_class(request):
    def fin():
        sc.delete()
    request.addfinalizer(fin)
    logger.info("Creating storage")
    data = {'api_version': 'v1', 'kind': 'namespace'}
    # data is usually loaded from a yaml template
    sc = OCS(**data)
    return sc
I cannot find any fixture named "request" in the project, so I assume it's a built-in fixture. I have searched the docs, however, and I cannot find a "request" built-in fixture: https://docs.pytest.org/en/latest/builtin.html
Can anybody shed some light on this (built-in?) fixture?
Thanks!

The request fixture gives you information about the requesting test context; see the pytest documentation for more details and examples of the request fixture.
Its most common uses are addfinalizer and config.
And if you only need teardown functionality, you can simply use yield and get rid of the request fixture:
@pytest.fixture()
def storage_class():
    logger.info("Creating storage")
    data = {'api_version': 'v1', 'kind': 'namespace'}
    sc = OCS(**data)
    yield sc
    # Any code after yield runs as teardown
    sc.delete()
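If you do keep the request fixture, it also exposes the surrounding test context, e.g. request.param for parametrized fixtures and request.config for pytest options. A minimal sketch (the fixture and parameter names here are made up for illustration):
import pytest

@pytest.fixture(params=['v1', 'v2'])
def api_version(request):
    # request.param is the current parameter of this fixture,
    # request.config exposes pytest configuration and options
    if request.config.getoption('verbose') > 0:
        print(f"Creating fixture for API version {request.param}")
    return request.param

def test_api_version(api_version):
    assert api_version in ('v1', 'v2')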

How can I properly test a 2-parameter function using pytest?

I'm trying to properly test this simple function:
def get_content_from_header(request, header_name):
    try:
        content = request.headers[header_name]
    except KeyError:
        logging.error(f"BAD REQUEST: '{header_name}' header is missing from the request.")
    except AttributeError:
        logging.error("BAD REQUEST: request has no attribute 'headers'.")
    else:
        return content
    return None
This is my code so far; I'm using parametrize along with a fixture to achieve my goal:
import main
import pytest

class ValidRequest:
    def __init__(self):
        self.headers = {
            'Authorization': 'test_auth'
        }

@pytest.fixture
def mocked_request():
    request = ValidRequest()
    return request

@pytest.mark.parametrize("possible_input, expected_output",
                         [('Authorization', 'test_auth'),
                          ('InvalidHeader', None)])
def test_get_content_from_header(mocked_request, possible_input, expected_output):
    # Run the function with the mocked request
    assert main.get_content_from_header(mocked_request, possible_input) == expected_output
Here's my problem: I only test the second parameter of get_content_from_header, not request, which is the first one. How can I properly do that?
Should I create a new class InvalidRequest and test my function with it in a new test function just below test_get_content_from_header?
Or should I add this new parameter through parametrize in the existing test function?
What is the cleanest (most Pythonic) way to do it?
I would suggest a little change here; let's simplify that function a bit. Since we are getting a certain header from the request's headers dict, we can just pass the headers dict instead of the whole request.
def get_content_from_header(headers: dict, header_name: str):
    if header_name in headers:
        return headers[header_name]
    return None
This works the same way as your function, and you do not have to test your request parameter. Now you can test that in a very simple manner:
def test_get_content_from_header_returning_header_value():
    headers = {"Authorization": "test_auth"}
    assert get_content_from_header(headers, "Authorization") == "test_auth"

def test_get_content_from_header_returning_none():
    headers = {"Authorization": None}
    assert get_content_from_header(headers, "Authorization") is None
You don't need to test your request in these tests. To cover the request param, refer to https://flask.palletsprojects.com/en/2.0.x/testing/, and more specifically the test client usage, and test your endpoints.
As for the loggers, I would usually place those where you actually call the get_content_from_header function.
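If you do want to keep the original two-parameter signature and exercise the request argument as well, one option (a sketch, not part of the answer above; the InvalidRequest class and the indirect fixture wiring are assumptions) is to parametrize the mocked_request fixture indirectly:
import pytest
import main

class ValidRequest:
    def __init__(self):
        self.headers = {'Authorization': 'test_auth'}

class InvalidRequest:
    # Deliberately has no 'headers' attribute
    pass

@pytest.fixture
def mocked_request(request):
    # request.param is the request class to instantiate for this test case
    return request.param()

@pytest.mark.parametrize(
    "mocked_request, header_name, expected_output",
    [(ValidRequest, 'Authorization', 'test_auth'),
     (ValidRequest, 'InvalidHeader', None),
     (InvalidRequest, 'Authorization', None)],
    indirect=['mocked_request'],
)
def test_get_content_from_header(mocked_request, header_name, expected_output):
    assert main.get_content_from_header(mocked_request, header_name) == expected_output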

How to handle a session generated in the setup and teardown methods in pytest from another file test_1.py

def setup_method(usern, pwd):
    global token, session
    inputdata = ''
    session = requests.Session()
    inputdata = {
        "username": "XXXXXt",
        "password": "<XXXxx"
    }
    response = session.post(config.login_url, data=inputdata, headers=config.api_headers)
    token = json.loads(response.text).get('token')
    config.api_headers["X-CSRF-Token"] = json.loads(response.text).get('token')

def teardown_method():
    inputdata = ''
    config.api_headers["X-CSRF-Token"] = token
    session.post(config.logout_url, data=inputdata, headers=config.api_headers)
    # print("logout:", token)
    # assert (json.loads(response.text)).get('ResponseStatus') in "SUCCESS"
How can I handle the session that is generated in these setup and teardown methods from another file, test_1.py?
Please use conftest.py for this; refer to the documentation at https://docs.pytest.org/en/2.7.3/plugins.html?highlight=re
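A minimal sketch of what that conftest.py could look like, assuming config exposes login_url, logout_url, and api_headers as in the question (the fixture name api_session and the example URL in the commented test are assumptions):
# conftest.py
import json

import pytest
import requests

import config

@pytest.fixture(scope="session")
def api_session():
    # Log in once and share the session with every test that requests this fixture
    session = requests.Session()
    inputdata = {"username": "XXXXXt", "password": "<XXXxx"}
    response = session.post(config.login_url, data=inputdata, headers=config.api_headers)
    token = json.loads(response.text).get('token')
    config.api_headers["X-CSRF-Token"] = token
    yield session
    # Teardown: log out once the test session is over
    session.post(config.logout_url, data='', headers=config.api_headers)

# test_1.py -- any test file can simply declare the fixture as a parameter:
# def test_something(api_session):
#     response = api_session.get(config.some_url, headers=config.api_headers)
#     assert response.status_code == 200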

In Django how to mock an object method called by views.py during its import?

I am writing System Tests for my Django app, where I test the complete application via HTTP requests and mock its external dependencies' APIs.
In views.py I have something like:
from external_service import ExternalService

externalService = ExternalService
data = externalService.get_data()

@csrf_exempt
def endpoint(request):
    do_something()
What I want is to mock (or stub) ExternalService to return a predefined response when its method get_data() is called.
The problem is that when I run python manage.py test, views.py is loaded before my test class, so by the time I patch the object with a mocked one, get_data() has already been called.
This solution didn't work either.
First off, don't call your method at import time. That can't be necessary, surely?
If get_data does something like a get request, e.g.
def get_data():
response = requests.get(DATA_URL)
if response.ok:
return response
else:
return None
Then you can mock it:
from unittest.mock import Mock, patch
from nose.tools import assert_is_none, assert_list_equal

from external_service import ExternalService

@patch('external_service.requests.get')
def test_getting_data(mock_get):
    data = [{
        'content': 'Response data'
    }]
    mock_get.return_value = Mock(ok=True)
    mock_get.return_value.json.return_value = data

    response = ExternalService.get_data()
    assert_list_equal(response.json(), data)

@patch('external_service.requests.get')
def test_getting_data_error(mock_get):
    mock_get.return_value.ok = False

    response = ExternalService.get_data()
    assert_is_none(response)
For this you'll need pip install nose if you don't already have it.
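To act on the "don't call it at import time" advice, here is a sketch of deferring the call so it only runs when the view is first hit, which makes it easy to patch in tests; the helper name _get_cached_data and the use of lru_cache are assumptions, not part of the original answer:
from functools import lru_cache

from django.views.decorators.csrf import csrf_exempt

from external_service import ExternalService

@lru_cache(maxsize=1)
def _get_cached_data():
    # Runs on first use instead of at import time, so tests can patch
    # ExternalService.get_data before any view is exercised.
    return ExternalService.get_data()

@csrf_exempt
def endpoint(request):
    data = _get_cached_data()
    do_something(data)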

Django test VS pytest

I am new to Django unittest and pytest. However, I have started to feel that pytest test cases are more compact and clearer.
Here are my test cases:
class OrderEndpointTest(TestCase):
    def setUp(self):
        user = User.objects.create_superuser(username='admin', password='password', email='pencil@gmail.com')
        mommy.make(CarData, _quantity=1)
        mommy.make(UserProfile, _quantity=1, user=user)

    def test_get_order(self):
        mommy.make(Shop, _quantity=1)
        mommy.make(Staff, _quantity=1, shop=Shop.objects.first())
        mommy.make(Order, _quantity=1, car_info={"color": "Black"}, customer={"name": "Lord Elcolie"},
                   staff=Staff.objects.first(), shop=Shop.objects.first())

        factory = APIRequestFactory()
        user = User.objects.get(username='admin')
        view = OrderViewSet.as_view({'get': 'list'})
        request = factory.get('/api/orders/')
        force_authenticate(request, user=user)
        response = view(request)

        assert 200 == response.status_code
        assert 1 == len(response.data.get('results'))
And here is the pytest version
def test_get_order(car_data, admin_user, orders):
    factory = APIRequestFactory()
    user = User.objects.get(username='admin')
    view = OrderViewSet.as_view({'get': 'list'})
    request = factory.get('/api/orders/')
    force_authenticate(request, user=user)
    response = view(request)

    assert 200 == response.status_code
    assert 1 == len(response.data.get('results'))
The benefit of pytest is that fixtures live in another file; it keeps my test cases compact by letting them be input parameters.
Is there any benefit to using Django unittest rather than pytest?
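For context, the car_data, admin_user and orders fixtures referenced above would live in a conftest.py roughly like the sketch below (based on the setUp above; the model imports are omitted, the factory details are assumptions, and note that pytest-django already ships a built-in admin_user fixture):
# conftest.py (sketch; model imports come from the project's own apps)
import pytest
from model_mommy import mommy

@pytest.fixture
def car_data(db):
    return mommy.make(CarData, _quantity=1)

@pytest.fixture
def admin_user(db):
    return User.objects.create_superuser(
        username='admin', password='password', email='pencil@gmail.com')

@pytest.fixture
def orders(db):
    shop = mommy.make(Shop)
    staff = mommy.make(Staff, shop=shop)
    return mommy.make(
        Order, _quantity=1,
        car_info={"color": "Black"}, customer={"name": "Lord Elcolie"},
        staff=staff, shop=shop)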
Updates: 1 July 2017, 5 July 2017, 1 Sep 2017, 29 Sep 2017, 26 Dec 2017
Pytest reduces your problems when fixtures get mutated across tests. I had test cases that passed when run individually but failed when the suite ran as a whole.
Pytest shows you the assertion output if an error occurs. Django unittest does not; I have to set a breakpoint on my own and investigate the error.
Pytest lets you use a real database with a simple decorator (see the sketch after this list). Django test does not; you have to create your own customized command for the job.
Pytest is generic. Being generic means you can work comfortably on projects outside Django, for example when building a micro-service such as Flask plus third parties like APScheduler, PyRad, etc. I mention this because my backend work is 50% Django; the rest is Python and infrastructure.
Pytest does not use multiple inheritance to create my fixtures.
Unittest has an advantage over pytest on GitLab CI with Docker as a runner: it executes smoothly without any extra configuration.
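For the point about database access via a simple decorator, a minimal sketch using pytest-django's marker (the model used is just illustrative; the --reuse-db/--create-db command-line flags control whether the test database is recreated between runs):
import pytest
from django.contrib.auth.models import User

# pytest-django grants database access per test via a marker
@pytest.mark.django_db
def test_create_user():
    User.objects.create_user(username='alice', password='secret')
    assert User.objects.filter(username='alice').exists()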
I've used Django test for my entire life and now I am using pytest. I agree that pytest is much cleaner than Django's own test framework.
The benefit of pytest is that fixtures live in another file; it keeps my test cases compact by letting them be input parameters.
In Django unittest you can still use fixtures from another file by using the attribute fixtures = ['appname/fixtures/my_fixture.json'].
Pytest shows you the assertion output if an error occurs. Django unittest does not; I have to set a breakpoint on my own and investigate the error.
Did you try changing the --verbosity option of python manage.py test?
A few tips:
There is a package called pytest-django that will help you integrate and use Django with pytest (a minimal configuration sketch follows the example below).
I think that if you use classes you will not need factory = APIRequestFactory(); the test methods themselves can take a parameter called client, a Django test client you can use to access your views.
import pytest
from model_mommy import mommy

@pytest.fixture()
def user(db):
    return mommy.make(User)

class SiteAPIViewTestSuite:
    def test_create_view(self, client, user):
        assert Site.objects.count() == 0

        post_data = {
            'name': 'Stackoverflow',
            'url': 'http://stackoverflow.com',
            'user_id': user.id,
        }
        response = client.post(
            reverse('sites:create'),
            json.dumps(post_data),
            content_type='application/json',
        )
        data = response.json()

        assert response.status_code == 201
        assert Site.objects.count() == 1
        assert data == {
            'count': 1,
            'next': None,
            'previous': None,
            'results': [{
                'pk': 1,
                'name': 'Stackoverflow',
                'url': 'http://stackoverflow.com',
                'user_id': user.id
            }]
        }
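The configuration sketch mentioned above: pytest-django only needs to know where your settings module lives, typically via a pytest.ini (the module name mysite.settings is a placeholder for your project):
# pytest.ini (or the equivalent section of setup.cfg / pyproject.toml)
[pytest]
DJANGO_SETTINGS_MODULE = mysite.settings
python_files = tests.py test_*.py *_tests.py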

Mocking boto3 S3 client method Python

I'm trying to mock a singular method on the boto3 S3 client object to throw an exception, but I need all other methods for this class to work as normal.
This is so I can test a singular Exception case when an error occurs performing an upload_part_copy.
1st Attempt
import boto3
from mock import patch

with patch('botocore.client.S3.upload_part_copy', side_effect=Exception('Error Uploading')) as mock:
    client = boto3.client('s3')

    # Should return actual result
    o = client.get_object(Bucket='my-bucket', Key='my-key')

    # Should return mocked exception
    e = client.upload_part_copy()
However this gives the following error:
ImportError: No module named S3
2nd Attempt
After looking at the botocore.client.py source code I found that it is doing something clever and the method upload_part_copy does not exist statically. It seems to call BaseClient._make_api_call instead, so I tried to mock that:
import boto3
from mock import patch

with patch('botocore.client.BaseClient._make_api_call', side_effect=Exception('Error Uploading')) as mock:
    client = boto3.client('s3')

    # Should return actual result
    o = client.get_object(Bucket='my-bucket', Key='my-key')

    # Should return mocked exception
    e = client.upload_part_copy()
This throws an exception... but on the get_object call, which I want to avoid.
Any ideas about how I can only throw the exception on the upload_part_copy method?
Botocore has a client stubber you can use for just this purpose: docs.
Here's an example of putting an error in:
import boto3
from botocore.stub import Stubber

client = boto3.client('s3')
stubber = Stubber(client)
stubber.add_client_error('upload_part_copy')
stubber.activate()

# Will raise a ClientError
client.upload_part_copy()
Here's an example of putting a normal response in. Additionally, the stubber can now be used in a context. It's important to note that the stubber will verify, so far as it is able, that your provided response matches what the service will actually return. This isn't perfect, but it will protect you from inserting total nonsense responses.
import boto3
from botocore.stub import Stubber

client = boto3.client('s3')
stubber = Stubber(client)

list_buckets_response = {
    "Owner": {
        "DisplayName": "name",
        "ID": "EXAMPLE123"
    },
    "Buckets": [{
        "CreationDate": "2016-05-25T16:55:48.000Z",
        "Name": "foo"
    }]
}
expected_params = {}
stubber.add_response('list_buckets', list_buckets_response, expected_params)

with stubber:
    response = client.list_buckets()

assert response == list_buckets_response
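To tie this back to the original question, add_client_error also accepts the error code, message, and HTTP status, so the stubbed failure can resemble the real service error. A sketch (the error code, message, and request parameters are arbitrary placeholders):
import boto3
from botocore.stub import Stubber

client = boto3.client('s3')
stubber = Stubber(client)

# Queue an error response for the one operation that should fail
stubber.add_client_error(
    'upload_part_copy',
    service_error_code='InternalError',
    service_message='Error Uploading',
    http_status_code=500,
)

with stubber:
    # Raises botocore.exceptions.ClientError with the error shape above
    client.upload_part_copy(
        Bucket='my-bucket',
        Key='my-key',
        CopySource={'Bucket': 'src-bucket', 'Key': 'src-key'},
        PartNumber=1,
        UploadId='example-upload-id',
    )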
As soon as I posted on here I managed to come up with a solution. Here it is, hope it helps :)
import botocore
from botocore.exceptions import ClientError
from mock import patch
import boto3

orig = botocore.client.BaseClient._make_api_call

def mock_make_api_call(self, operation_name, kwarg):
    if operation_name == 'UploadPartCopy':
        parsed_response = {'Error': {'Code': '500', 'Message': 'Error Uploading'}}
        raise ClientError(parsed_response, operation_name)
    return orig(self, operation_name, kwarg)

with patch('botocore.client.BaseClient._make_api_call', new=mock_make_api_call):
    client = boto3.client('s3')

    # Should return actual result
    o = client.get_object(Bucket='my-bucket', Key='my-key')

    # Should return mocked exception
    e = client.upload_part_copy()
Jordan Philips also posted a great solution using the botocore.stub.Stubber class. While a cleaner solution, I was unable to mock specific operations.
If you don't want to use either moto or the botocore stubber (the stubber does not appear to prevent HTTP requests being made to AWS API endpoints), you can use the more verbose unittest.mock approach:
foo/bar.py
import boto3

def my_bar_function():
    client = boto3.client('s3')
    buckets = client.list_buckets()
    ...
bar_test.py
import unittest
from unittest import mock

from foo import bar

class MyTest(unittest.TestCase):
    @mock.patch('foo.bar.boto3.client')
    def test_that_bar_works(self, mock_s3_client):
        bar.my_bar_function()
        self.assertTrue(mock_s3_client.return_value.list_buckets.call_count == 1)
Here's an example of a simple Python unittest that can be used to fake a client = boto3.client('ec2') API call:
import unittest
from unittest import mock

import boto3

class MyAWSModule():
    def __init__(self):
        client = boto3.client('ec2')
        tags = client.describe_tags(DryRun=False)

class TestMyAWSModule(unittest.TestCase):
    @mock.patch("boto3.client")
    def test_describe_tags(self, mock_boto_client):
        mock_describe_tags = mock_boto_client.return_value.describe_tags
        mock_describe_tags.return_value = mock_get_tags_response

        my_aws_module = MyAWSModule()

        mock_boto_client.assert_called_once_with('ec2')
        mock_describe_tags.assert_called_once_with(DryRun=False)

mock_get_tags_response = {
    'Tags': [
        {
            'ResourceId': 'string',
            'ResourceType': 'customer-gateway',
            'Key': 'string',
            'Value': 'string'
        },
    ],
    'NextToken': 'string'
}
Hopefully that helps.
What about simply using moto?
It comes with a very handy decorator:
from moto import mock_s3

@mock_s3
def test_my_model_save():
    pass
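A slightly fuller sketch of what such a test might do under the decorator; the bucket name, key, and body are just placeholders:
import boto3
from moto import mock_s3

@mock_s3
def test_my_model_save():
    # Inside the decorator, boto3 talks to moto's in-memory S3 backend
    conn = boto3.client('s3', region_name='us-east-1')
    conn.create_bucket(Bucket='mybucket')
    conn.put_object(Bucket='mybucket', Key='steve', Body=b'is awesome')

    body = conn.get_object(Bucket='mybucket', Key='steve')['Body'].read()
    assert body == b'is awesome'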
I had to mock the boto3 client for some integration testing and it was a bit painful! The problem I had is that moto does not support KMS very well, yet I did not want to rewrite my own mock for the S3 buckets. So I created this morph of all of the answers. It also works globally, which is pretty cool!
I have it set up with two files.
The first one is aws_mock.py. For the KMS mocking I have some predefined responses that came from a live boto3 client.
from unittest.mock import MagicMock

import boto3
from moto import mock_s3

# `create_key` response
create_resp = { ... }

# `generate_data_key` response
generate_resp = { ... }

# `decrypt` response
decrypt_resp = { ... }

def client(*args, **kwargs):
    if args[0] == 's3':
        s3_mock = mock_s3()
        s3_mock.start()
        mock_client = boto3.client(*args, **kwargs)
    else:
        mock_client = boto3.client(*args, **kwargs)
        if args[0] == 'kms':
            mock_client.create_key = MagicMock(return_value=create_resp)
            mock_client.generate_data_key = MagicMock(return_value=generate_resp)
            mock_client.decrypt = MagicMock(return_value=decrypt_resp)
    return mock_client
The second one is the actual test module; let's call it test_my_module.py. I've omitted the code of my_module, as well as the functions under test; let's call those foo and bar.
from unittest.mock import patch

import aws_mock
import my_module

@patch('my_module.boto3')
def test_my_module(boto3):
    # Some prep work for the mock mode
    boto3.client = aws_mock.client
    conn = boto3.client('s3')
    conn.create_bucket(Bucket='my-bucket')

    # Actual testing
    resp = my_module.foo()
    assert(resp == 'Valid')
    resp = my_module.bar()
    assert(resp != 'Not Valid')
    # Etc, etc, etc...
One more thing: I'm not sure if it has been fixed, but I found that moto was not happy unless you set some environment variables, such as credentials and region. They don't have to be actual credentials, but they do need to be set. There is a chance it has been fixed by the time you read this, but here is the code in case you need it, shell code this time:
export AWS_ACCESS_KEY_ID='foo'
export AWS_SECRET_ACCESS_KEY='bar'
export AWS_DEFAULT_REGION='us-east-1'
I know it is probably not the prettiest piece of code, but if you are looking for something universal it should work pretty well!
Here is my solution for patching a boto client used in the bowels of my project, with pytest fixtures. I'm only using 'mturk' in my project.
The trick for me was to create my own client, and then patch boto3.client with a function that returns that pre-created client.
import boto3
import pytest
from botocore.stub import Stubber
from unittest.mock import patch

@pytest.fixture(scope='session')
def patched_boto_client():
    my_client = boto3.client('mturk')

    def my_client_func(*args, **kwargs):
        return my_client

    with patch('bowels.of.project.other_module.boto3.client', my_client_func):
        yield my_client_func

def test_create_hit(patched_boto_client):
    client = patched_boto_client()
    stubber = Stubber(client)
    stubber.add_response('create_hit_type', {'my_response': 'is_great'})
    stubber.add_response('create_hit_with_hit_type', {'my_other_response': 'is_greater'})
    stubber.activate()

    import bowels.of.project  # this module imports `other_module`
    bowels.of.project.create_hit_function_that_calls_a_function_in_other_module_which_invokes_boto3_dot_client_at_some_point()
I also define another fixture that sets up dummy AWS creds so that boto doesn't accidentally pick up some other set of credentials on the system. I literally set 'foo' and 'bar' as my creds for testing; that's not a redaction.
It's important that the AWS_PROFILE env var be unset, because otherwise boto will go looking for that profile.
import os

@pytest.fixture(scope='session')
def setup_env():
    os.environ['AWS_ACCESS_KEY_ID'] = 'foo'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'bar'
    os.environ.pop('AWS_PROFILE', None)
And then I specify setup_env as a pytest usefixtures entry so that it gets used for every test run.
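One way to wire that up is pytest's usefixtures ini option (an equivalent alternative would be declaring the fixture with autouse=True); a minimal sketch:
# pytest.ini
[pytest]
usefixtures = setup_env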
I had a slightly different use case where the client is set up during a setup() method in a Class, as it does a few things such as listing things from the AWS service it's talking to (Connect, in my case). Lots of the above approaches weren't quite working, so here's my working version for future Googlers.
In order to get everything to work properly, I had to do this:
In the class under test (src/flow_manager.py):
class FlowManager:
    client: botocore.client.BaseClient

    def setup(self):
        self.client = boto3.client('connect')

    def set_instance(self):
        response = self.client.list_instances()
        # ... do stuff ...
In the test file (tests/unit/test_flow_manager.py):
#mock.patch('src.flow_manager.boto3.client')
def test_set_instance(self, mock_client):
expected = 'bar'
instance_list = {'alias': 'foo', 'id': 'bar'}
mock_client.list_instances.return_value = instance_list
actual = flow_manager.FlowManager("", "", "", "", 'foo')
actual.client = mock_client
actual.set_instance()
self.assertEqual(expected, actual.instance_id)
I've truncated the code to the relevant bits for this answer.
