I am new to Django unittest and pytest, but I have started to feel that pytest test cases are more compact and clearer. Here are my test cases:
class OrderEndpointTest(TestCase):
    def setUp(self):
        user = User.objects.create_superuser(username='admin', password='password', email='pencil@gmail.com')
        mommy.make(CarData, _quantity=1)
        mommy.make(UserProfile, _quantity=1, user=user)

    def test_get_order(self):
        mommy.make(Shop, _quantity=1)
        mommy.make(Staff, _quantity=1, shop=Shop.objects.first())
        mommy.make(Order, _quantity=1, car_info={"color": "Black"}, customer={"name": "Lord Elcolie"},
                   staff=Staff.objects.first(), shop=Shop.objects.first())

        factory = APIRequestFactory()
        user = User.objects.get(username='admin')
        view = OrderViewSet.as_view({'get': 'list'})
        request = factory.get('/api/orders/')
        force_authenticate(request, user=user)
        response = view(request)

        assert 200 == response.status_code
        assert 1 == len(response.data.get('results'))
And here is the pytest version:
def test_get_order(car_data, admin_user, orders):
    factory = APIRequestFactory()
    user = User.objects.get(username='admin')
    view = OrderViewSet.as_view({'get': 'list'})
    request = factory.get('/api/orders/')
    force_authenticate(request, user=user)
    response = view(request)

    assert 200 == response.status_code
    assert 1 == len(response.data.get('results'))
The benefit of pytest is that fixtures can live in another file. It keeps my test cases compact by letting them be input parameters.
Is there any benefit to using Django unittest rather than pytest?
Update: 1July2017
Update: 5July2017
Update: 1Sep2017
Update: 29Sep2017
Update: 26Dec2017
Pytest reduces your problems when fixtures get mutated over tests. I had test cases that passed when run individually, but failed when the suite ran as a whole.

Pytest shows you the assertion output when an error occurs. Django unittest does not; I have to set a breakpoint on my own and investigate the error.

Pytest lets you use a real database with a simple decorator. Django test does not; you have to create your own customized command for the job.

Pytest is generic. Being generic means you feel comfortable working on projects outside Django, for example when you have to build a micro-service such as Flask + third parties like APScheduler, PyRad, etc. I mention this because my backend life is 50% Django; the rest is plain Python and infrastructure.

Pytest does not use multiple inheritance to create my fixtures.

Unittest has one advantage over pytest on gitlab-ci with Docker as a runner: it executes smoothly without any extra configuration.
I've used Django test my whole life, and now I am using py.test. I agree that pytest is much cleaner than Django's own test framework.
The benefit of pytest is that fixtures can live in another file. It keeps my test cases compact by letting them be input parameters.

In Django unittest you can still keep fixtures in another file by using the attribute fixtures = ['appname/fixtures/my_fixture.json'].
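For illustration, a minimal sketch of that attribute (the path, model and field names here are assumptions, not from the question): the file holds serialized rows that Django loads into the test database before each test in the class.

```
# appname/fixtures/my_fixture.json
[
  {
    "model": "appname.userprofile",
    "pk": 1,
    "fields": {"name": "test name"}
  }
]

# tests.py
class MyTest(TestCase):
    fixtures = ['appname/fixtures/my_fixture.json']

    def test_profile_loaded(self):
        self.assertEqual(UserProfile.objects.count(), 1)
```

No extra plugin is needed; python manage.py test picks the fixture up automatically.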
Pytest will show you the assertion output when an error occurs. Django unittest does not; I have to set a breakpoint on my own and investigate the error.

Did you try changing the --verbosity param on python manage.py test?
A few tips:
There is a package called pytest-django that will help you integrate and use Django with pytest.
I think that if you use classes you will not need factory = APIRequestFactory(); the test methods have a parameter called client that is an interface to the Python requests module for accessing your views.
import json

import pytest
from django.urls import reverse
from model_mommy import mommy

@pytest.fixture()
def user(db):
    return mommy.make(User)

class SiteAPIViewTestSuite:
    def test_create_view(self, client, user):
        assert Site.objects.count() == 0
        post_data = {
            'name': 'Stackoverflow',
            'url': 'http://stackoverflow.com',
            'user_id': user.id,
        }
        response = client.post(
            reverse('sites:create'),
            json.dumps(post_data),
            content_type='application/json',
        )
        data = response.json()

        assert response.status_code == 201
        assert Site.objects.count() == 1
        assert data == {
            'count': 1,
            'next': None,
            'previous': None,
            'results': [{
                'pk': 1,
                'name': 'Stackoverflow',
                'url': 'http://stackoverflow.com',
                'user_id': user.id
            }]
        }
I am working on writing unit tests for my FastAPI project.
One endpoint involves getting a ServiceNow ticket. Here is the code I want to test:
from aiosnow.models.table.declared import IncidentModel as Incident
from fastapi import APIRouter

router = APIRouter()

@router.post("/get_ticket")
async def snow_get_ticket(req: DialogflowRequest):
    """Retrieves the status of the ticket in the parameter."""
    client = create_snow_client(
        SNOW_TEST_CONFIG.servicenow_url, SNOW_TEST_CONFIG.user, SNOW_TEST_CONFIG.pwd
    )
    params: dict = req.sessionInfo["parameters"]
    ticket_num = params["ticket_num"]
    try:
        async with Incident(client, table_name="incident") as incident:
            response = await incident.get_one(Incident.number == ticket_num)
            stage_value = response.data["state"].value
            desc = response.data["description"]
            [...data manipulation, unimportant parts]
What I am having trouble with is mocking the client response; every time, the actual client gets invoked and makes the API call, which I don't want.
Here is the current version of my unit test:
from fastapi.testclient import TestClient

client = TestClient(app)

@patch("aiosnow.models.table.declared.IncidentModel")
def test_get_ticket_endpoint_valid_ticket_num(self, mock_client):
    mock_client.return_value = {"data": {"state": "new",
                                         "description": "test"}}
    response = client.post(
        "/snow/get_ticket", json=json.load(self.test_request)
    )
    assert response.status_code == 200
I think my problem is patching the wrong object, but I am not sure what else to patch.
In your test you are calling client.post(...); if you don't want this to go to the ServiceNow API, that client dependency should be mocked.
Edit 1:
Okay, so the way your test is set up now, the self arg is the mocked IncidentModel object, so only that object will be a mock. Since you are creating a brand-new IncidentModel object in your post method, it is a real IncidentModel object, hence why it actually calls the API.
In order to mock the IncidentModel.get_one method so that it returns your mock value any time an object calls it, you want to do something like this:
def test_get_ticket_endpoint_valid_ticket_num(mock_client):
    mock_client.return_value = {"data": {"state": "new",
                                         "description": "test"}}
    with patch.object(aiosnow.models.table.declared.IncidentModel, "get_one", return_value=mock_client):
        response = client.post(
            "/snow/get_ticket", json=json.load(self.test_request)
        )
        assert response.status_code == 200
Because of the way variable assignment works in Python, changing aiosnow.models.table.declared.IncidentModel will not change the IncidentModel that you've imported into your Python file. You have to do the mocking where you use the object.
So instead of @patch("aiosnow.models.table.declared.IncidentModel"), you want @patch("your_python_file.IncidentModel").
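A stdlib-only sketch of this rule, with made-up module names ("lib" and "app") standing in for aiosnow and the module containing your route: app does the equivalent of from lib import IncidentModel, so patching "lib.IncidentModel" does not change the copy of the name that app already holds.

```python
import sys
import types
from unittest.mock import patch

# fake "library" module
lib = types.ModuleType("lib")

class IncidentModel:  # stands in for the real class
    def get_one(self):
        return "real API call"

lib.IncidentModel = IncidentModel
sys.modules["lib"] = lib

# fake "application" module that did `from lib import IncidentModel`
app = types.ModuleType("app")
app.IncidentModel = lib.IncidentModel  # the import-time binding
sys.modules["app"] = app

def handler():
    # looks the name up through app, like your route function does
    return app.IncidentModel().get_one()

with patch("lib.IncidentModel"):
    wrong_target = handler()   # app's binding untouched: still the real class

with patch("app.IncidentModel") as m:
    m.return_value.get_one.return_value = "mocked"
    right_target = handler()   # app's binding replaced: mock is used
```

The string you pass to patch names the attribute to swap, so it must point at the module where the name is looked up, not where the class was defined.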
I'm trying to run a pytest test which uses the following function:
def storage_class(request):
    def fin():
        sc.delete()
    request.addfinalizer(fin)

    logger.info("Creating storage")
    data = {'api_version': 'v1', 'kind': 'namespace'}
    # data is usually loaded from a yaml template
    sc = OCS(**data)
    return sc
I cannot find any fixture named "request" in the project, so I assume it is built-in. However, I have searched for it in the docs and cannot find a "request" built-in fixture: https://docs.pytest.org/en/latest/builtin.html
Can anybody shed some light on this (built-in?) fixture?
Thanks!
The request fixture helps you get information about the test context; see the pytest documentation for more details and examples of its use. Its most common uses are addfinalizer and config.
And if you only need teardown functionality, you can simply use yield and get rid of the request fixture:
@pytest.fixture()
def storage_class():
    logger.info("Creating storage")
    data = {'api_version': 'v1', 'kind': 'namespace'}
    sc = OCS(**data)
    yield sc
    # Any code after yield gives you the teardown effect
    sc.delete()
I am writing System Tests for my Django app, where I test the complete application via HTTP requests and mock its external dependencies' APIs.
In views.py I have something like:
from external_service import ExternalService

externalService = ExternalService()
data = externalService.get_data()

@csrf_exempt
def endpoint(request):
    do_something()
What I want is to mock (or stub) ExternalService so that it returns a predefined response when its method get_data() is called.
The problem is that when I run python manage.py test, views.py is loaded before my test class, so by the time I patch the object with a mocked one, get_data() has already been called.
This solution didn't work either.
First off, don't call your method at import time. That can't be necessary, surely?
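A stdlib-only sketch of that point (the names are stand-ins, not the asker's real modules): move the get_data() call from module import time into the view body, so that a patch installed by the test is active when the call finally happens.

```python
from unittest.mock import patch

class ExternalService:  # stand-in for the real dependency
    def get_data(self):
        return "live data"  # pretend this hits the network

def endpoint(request):
    # called per-request, after any test mocks are in place,
    # instead of once at import time
    return ExternalService().get_data()

with patch.object(ExternalService, "get_data", return_value="stubbed"):
    mocked = endpoint(None)    # sees the stub

unmocked = endpoint(None)      # outside the patch: real method again
```

Because nothing runs at import, the test controls exactly what the view sees.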
If get_data does something like a get request, e.g.
def get_data():
    response = requests.get(DATA_URL)
    if response.ok:
        return response
    else:
        return None
Then you can mock it:
from unittest.mock import Mock, patch

from nose.tools import assert_is_none, assert_list_equal

from external_service import ExternalService

@patch('external_service.requests.get')
def test_getting_data(mock_get):
    data = [{
        'content': 'Response data'
    }]
    mock_get.return_value = Mock(ok=True)
    mock_get.return_value.json.return_value = data

    response = ExternalService.get_data()
    assert_list_equal(response.json(), data)

@patch('external_service.requests.get')
def test_getting_data_error(mock_get):
    mock_get.return_value.ok = False
    response = ExternalService.get_data()
    assert_is_none(response)
For this you'll need pip install nose if you don't already have it.
I am writing an API in Flask. I have a couple of views that return JSON responses, and I have written some unit tests that check whether those views work properly and return correct data. Then I turned on the coverage plugin for nosetests (in my case nose-cov).
And here is where my problem starts: coverage does not see my views as being executed by the tests.
First some base code to give you full picture:
My view:
def get_user(uid):
    """Retrieve user.

    Args:
        uid (url): valid uid value

    Usage: ::

        GET user/<uid>/

    Returns:
        obj: ::

            {
                'data': {
                    `response.User`,
                },
                'success': True,
                'status': 'get'
            }
    """
    if not uid:
        raise exception.ValueError("Uid is empty")

    obj = db_layer.user.get_user(uid=uid)
    return {
        'data': obj.to_dict(),  # to_dict is a helper method that converts part of the ORM into a dict
        'success': True,
        'status': 'get',
    }
My test:
class TestUserViews(base.TestViewsBase):
def test_get_user(self):
uid = 'some_uid_from_fixtures'
name = 'some_name_from_fixtures'
response = self.get(u'user/uid/{}/'.format(uid))
self.assertEqual(response.status_code, 200)
user_data = json.loads(response.text)['data']
self.assertEqual(name, user_data['username'])
self.assertEqual(uid, user_data['uid'])
def get(self, method, headers=None):
"""
Wrapper around requests.get, reassures that authentication is
sorted for us.
"""
kwargs = {
'headers': self._build_headers(headers),
}
return requests.get(self.get_url(method), **kwargs)
def get_url(self, method):
return '{}/{}/{}'.format(self.domain, self.version, method)
def _build_headers(self, headers=None):
if headers is None:
headers = {}
headers.update({
'X-Auth-Token': 'some-token',
'X-Auth-Token-Test-User-Id': 'some-uid',
})
return headers
To run the test suite I have a special shell script that performs a few actions for me:
#!/usr/bin/env bash
HOST="0.0.0.0"
PORT="5001"
ENVS="PYTHONPATH=$PYTHONPATH:$PWD"
# start server
START_SERVER="$ENVS python $PWD/server.py --port=$PORT --host=$HOST"
eval "$START_SERVER&"
PID=$!
eval "$ENVS nosetests -s --nologcapture --cov-report html --with-cov"
kill -9 $PID
After that, the view is reported as not being executed.

OK, 12 hours later I found a solution. I checked flask, werkzeug, requests, subprocess and the threading lib, only to learn that the problem was somewhere else. The solution was in fact easy: the bit of code that has to be modified is the execution of server.py. We need to cover it as well, and then merge the results generated by server.py with those generated by nosetests. The modified test-runner.sh looks as follows:
#!/usr/bin/env bash
HOST="0.0.0.0"
PORT="5001"
ENVS="COVERAGE_PROCESS_START=$PWD/.apirc PYTHONPATH=$PYTHONPATH:$PWD"
START_SERVER="$ENVS coverage run --rcfile=.apirc $PWD/server.py --port=$PORT --host=$HOST"
eval "$START_SERVER&"
eval "$ENVS nosetests -s --nologcapture --cov-config=.apirc --cov-report html --with-cov"
# this is important bit, we have to stop flask server gracefully otherwise
# coverage won't get a chance to collect and save all results
eval "curl -X POST http://$HOST:$PORT/0.0/shutdown/"
# this will merge results from both coverage runs
coverage combine --rcfile=.apirc
coverage html --rcfile=.apirc
where .apirc in my case looks as follows:
[run]
branch = True
parallel = True
source = files_to_cover/
[html]
directory = cover
The last thing we need to do is build into our Flask app a view that allows us to gracefully shut down the server. Previously I was brute-killing it with kill -9, which used to kill not only the server but coverage as well.
Follow this snippet: http://flask.pocoo.org/snippets/67/
And my view looks like this:
def shutdown():
    if config.SHUTDOWN_ALLOWED:
        func = request.environ.get('werkzeug.server.shutdown')
        if func is None:
            raise RuntimeError('Not running with the Werkzeug Server')
        func()
        return 'Server shutting down...'
It is important to use nose-cov instead of the standard coverage plugin, as it uses an rcfile and allows more configuration. In our case parallel is the key. Please note that the data_files variable does not work for nose-cov, so you cannot override it in .apirc and have to use the default values.
After all of that, your coverage will be shining with valid values.
I hope this will be helpful for someone out there.
I am writing test cases for my CRUD Flask app https://github.com/Leo-g/Flask-Skeleton/blob/master/tests.py
I want to ensure that the update and delete tests run only if the add test succeeds.
def test_add(self):
    rv = self.app.post('/users/add', data=dict(name='test name', email='test@email.com'), follow_redirects=True)
    assert 'Add was successful' in rv.data.decode('utf-8')

def test_Update(self):
    with app.app_context():
        id = Users.query.first().id
    rv = self.app.post('/users/update/{}'.format(id), data=dict(name='test name update', email='test@email.update'), follow_redirects=True)
    assert 'Update was successful' in rv.data.decode('utf-8')
While reading the docs I see that this can be done via the unittest.skip decorator, but I am not sure how to implement it.
You can use the --failfast command-line option when running the tests to make the test suite stop after a single test fails. For example, if you normally run the tests with the command python -m unittest you can change it to python -m unittest --failfast.
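If you want something closer to the unittest.skip idea from the question, one sketch (class and method names here are made up, not from the linked app) is to record on the class whether the add test passed and have dependent tests skip themselves when it did not. Method names are prefixed so the add test sorts first, since unittest runs methods alphabetically.

```python
import unittest

class CrudTest(unittest.TestCase):
    add_ok = False  # set to True only when the add test passes

    def test_a_add(self):
        # ... perform the add request and its assertions here ...
        CrudTest.add_ok = True  # only reached if the asserts above pass

    def test_b_update(self):
        if not CrudTest.add_ok:
            self.skipTest("add failed; skipping update")
        # ... perform the update request and its assertions here ...

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CrudTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Here test_a_add succeeds, so nothing is skipped; if its assertions raised, test_b_update would be reported as skipped instead of failed.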