Django and tests in docfiles

I'm having a small problem with my test suite with Django.
I'm working on a Python package that can run in both Django and Plone (http://pypi.python.org/pypi/jquery.pyproxy).
All the tests are written as doctests, either in the Python code or in separate docfiles (for example the README.txt).
I can get those tests running fine, but Django just does not count them:
[vincent ~/buildouts/tests/django_pyproxy]> bin/django test pyproxy
...
Creating test database for alias 'default'...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
But if I add a failing test, the failure is reported correctly:
[vincent ~/buildouts/tests/django_pyproxy]> bin/django test pyproxy
...
Failed example:
1+1
Expected nothing
Got:
2
**********************************************************************
1 items had failures:
1 of 44 in README.rst
***Test Failed*** 1 failures.
Creating test database for alias 'default'...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
This is how my test suite is declared right now:
import os
import doctest
from unittest import TestSuite
from jquery.pyproxy import base, utils

OPTIONFLAGS = (doctest.ELLIPSIS |
               doctest.NORMALIZE_WHITESPACE)

__test__ = {
    'base': doctest.testmod(
        m=base,
        optionflags=OPTIONFLAGS),
    'utils': doctest.testmod(
        m=utils,
        optionflags=OPTIONFLAGS),
    'readme': doctest.testfile(
        "../../../README.rst",
        optionflags=OPTIONFLAGS),
    'django': doctest.testfile(
        "django.txt",
        optionflags=OPTIONFLAGS),
}
I guess I'm doing something wrong when declaring the test suite but I don't have a clue what it is exactly.
Thanks for your help,
Vincent

I finally solved the problem with the suite() method:
import os
import doctest
from django.utils import unittest
from jquery.pyproxy import base, utils

OPTIONFLAGS = (doctest.ELLIPSIS |
               doctest.NORMALIZE_WHITESPACE)

testmods = {'base': base,
            'utils': utils}
testfiles = {'readme': '../../../README.rst',
             'django': 'django.txt'}

def suite():
    return unittest.TestSuite(
        [doctest.DocTestSuite(mod, optionflags=OPTIONFLAGS)
         for mod in testmods.values()] +
        [doctest.DocFileSuite(f, optionflags=OPTIONFLAGS)
         for f in testfiles.values()])
Apparently the problem with calling doctest.testfile or doctest.testmod is that the tests are run immediately, at import time.
Using DocTestSuite/DocFileSuite instead builds the suites, and the test runner then runs them.
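For reference, the same wiring can also be expressed with the standard unittest load_tests protocol, which test runners pick up automatically. A minimal sketch, assuming the same modules and doctest files as above:
import doctest
from jquery.pyproxy import base, utils

OPTIONFLAGS = doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE

def load_tests(loader, tests, ignore):
    # Build the doctest suites lazily; the runner executes them later.
    tests.addTests(doctest.DocTestSuite(base, optionflags=OPTIONFLAGS))
    tests.addTests(doctest.DocTestSuite(utils, optionflags=OPTIONFLAGS))
    tests.addTests(doctest.DocFileSuite("django.txt", optionflags=OPTIONFLAGS))
    return tests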

Related

Change XDG_DATA_HOME environment variable in pytest

I have some trouble changing environment variables with tmp_path in a project, so I tried to write a sample project to debug it. That doesn't work and I don't understand why. The project uses a settings.py file to define some constants; module.py imports these constants and does its stuff.
src/
settings.py:
import os
from pathlib import Path

XDG_HOME = Path(os.environ.get("XDG_DATA_HOME"))
HOME = XDG_HOME / "home"
module.py:
from xdg_and_pytest.settings import HOME

def return_home(default=HOME):
    return default
tests
In my tests, I have a fixture to change the environment variable. The first test that calls it gets tmp_path in the $XDG_DATA_HOME environment variable, but the second one gets the same path as the first ...
conftest.py
import pytest

@pytest.fixture
def new_home(tmp_path, monkeypatch):
    monkeypatch.setenv("XDG_DATA_HOME", str(tmp_path))
    return new_home
test_module.py
def test_new_home_first(new_home):
    from xdg_and_pytest.module import return_home
    assert "new_home_first" in str(return_home())

def test_new_home_second(new_home):
    from xdg_and_pytest.module import return_home
    assert "new_home_second" in str(return_home())
command-line result
poetry run pytest
====================== test session starts ======================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
collected 2 items
tests/test_module.py .F [100%]
=========================== FAILURES ============================
_____________________ test_new_home_second ______________________
new_home = <function new_home at 0x7f75991b13f0>
    def test_new_home_second(new_home):
        from xdg_and_pytest.module import return_home
>       assert "new_home_second" in str(return_home())
E       AssertionError: assert 'new_home_second' in '/tmp/pytest-of-bisam/pytest-92/test_new_home_first0/home'
E       +  where '/tmp/pytest-of-bisam/pytest-92/test_new_home_first0/home' = str(PosixPath('/tmp/pytest-of-bisam/pytest-92/test_new_home_first0/home'))
E       +  where PosixPath('/tmp/pytest-of-bisam/pytest-92/test_new_home_first0/home') = <function return_home at 0x7f759920bac0>()
tests/test_module.py:10: AssertionError
==================== short test summary info ====================
FAILED tests/test_module.py::test_new_home_second - AssertionE...
================== 1 failed, 1 passed in 0.09s ==================
This is the clearest code I got, but I have tried lots of different monkeypatching approaches. Maybe I should drop the idea of a settings.py file and try something else? I don't want to use a scope=session solution because I want to try different kinds of data in $XDG_DATA_HOME.
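For what it's worth, a likely cause is that settings.py evaluates $XDG_DATA_HOME once, at first import, after which Python caches the module in sys.modules, so the second test still sees the first test's path (the default=HOME argument is likewise evaluated only once, at function definition time). A minimal sketch of a lazy variant, assuming that diagnosis:
# settings.py -- re-read the environment on every call instead of
# once at import time (a sketch, not the original code).
import os
from pathlib import Path

def get_home():
    return Path(os.environ["XDG_DATA_HOME"]) / "home"

# module.py
from xdg_and_pytest.settings import get_home

def return_home():
    # Resolve the path at call time so monkeypatch.setenv takes effect.
    return get_home()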

pytest fixtures: testing pandas dataframe

I have some scripts in a package directory and some tests in a tests directory, along with a CSV file containing a dataframe that I want to use for testing purposes.
main_directory/
|
|- package/
| |- foo.py
| |- bar.py
|
|- tests/
|- conftest.py
|- test1.py
|- test.csv
I am using pytest and I have defined a conftest.py containing a fixture that I want to use for the whole test session; it should return a pandas test dataframe imported from a CSV file, as in the following:
# conftest.py
import pytest
from pandas import read_csv

path = "test.csv"

@pytest.fixture(scope="session")
def test_data():
    return read_csv(path)
I have been trying to use the fixture to return the test dataframe to the test functions.
The original test functions were a bit more complex, calling pandas groupby on the object returned by the fixture. I kept getting the error 'TestStrataFrame' object has no attribute 'groupby', so I simplified the test to the one below and, as I was still getting errors, I realized that I am probably missing something.
My test is the following:
# test1.py
import unittest
import pytest

class TestStrataFrame(unittest.TestCase):
    def test_fixture(test_data):
        assert isinstance(test_data, pd.DataFrame) is True
The above test_fixture returns:
=============================================== FAILURES ================================================
_____________________________________ TestStrataFrame.test_fixture ______________________________________
test_data = <tests.test_data.TestStrataFrame testMethod=test_fixture>
    def test_fixture(test_data):
        ciao = test_data
>       assert isinstance(ciao, pd.DataFrame) is True
E       AssertionError: assert False is True
E       +  where False = isinstance(<tests.test_data.TestStrataFrame testMethod=test_fixture>, <class 'pandas.core.frame.DataFrame'>)
E       +  where <class 'pandas.core.frame.DataFrame'> = pd.DataFrame
tests/test_data.py:23: AssertionError
=========================================== warnings summary ============================================
../../../../../opt/miniconda3/envs/geo/lib/python3.7/importlib/_bootstrap.py:219
/opt/miniconda3/envs/geo/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
return f(*args, **kwds)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
======================================== short test summary info ========================================
FAILED tests/test_data.py::TestStrataFrame::test_fixture - AssertionError: assert False is True
================================ 1 failed, 4 passed, 1 warning in 12.82s ================================
How can I do this correctly?
PS: At the moment I would not focus on the RuntimeWarning. I have been getting it since I started trying to solve this issue, but I am quite sure the tests were failing even before I got that warning, so they are probably unrelated. I reinstalled the environment and the warning persists; hopefully it will go away once the issue is solved...
this works for me:
isinstance(type(my_pd_df), type(pandas.core.frame.DataFrame))
The error comes up because test_data is not being passed to the test_fixture method. For example, below are two ways you can tweak your class and its methods.
import unittest
import pytest
import pandas as pd

class TestStrataFrame(unittest.TestCase):
    test_data = pd.DataFrame()

    def test_fixture(self):
        test_data = pd.DataFrame()
        assert isinstance(test_data, pd.DataFrame) is True

    def test_fixture_1(self):
        assert isinstance(TestStrataFrame.test_data, pd.DataFrame) is True
and run from the terminal: pytest test_sample.py
This is the expected behavior if you take note of this page here. That page clearly states:
The following pytest features do not work, and probably never will due to different design philosophies:
1. Fixtures (except for autouse fixtures, see below);
2. Parametrization;
3. Custom hooks;
You can modify your code as follows to make it work.
# conftest.py
from pathlib import Path

import pytest
from pandas import read_csv

CWD = Path(__file__).resolve()
FIN = CWD.parent / "test.csv"

@pytest.fixture(scope="class")
def test_data(request):
    request.cls.test_data = read_csv(FIN)
# test_file.py
import unittest

import pytest
import pandas as pd

@pytest.mark.usefixtures("test_data")
class TestStrataFrame(unittest.TestCase):
    def test_fixture(self):
        assert hasattr(self, "test_data")
        assert isinstance(self.test_data, pd.DataFrame)
==>pytest tests/
============================= test session starts ==============================
platform darwin -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/***/Desktop/scripts/stackoverflow
collected 1 item
tests/test_file.py . [100%]
============================== 1 passed in 0.03s ===============================
You can see more about mixing fixtures with the unittest framework here.
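As a side note, if the unittest.TestCase base class is not actually needed, a plain pytest test class can receive the session-scoped fixture directly as an argument; a minimal sketch:
# test_plain.py -- a sketch using a plain pytest class (no
# unittest.TestCase base), which can take fixtures as arguments.
import pandas as pd

class TestStrataFrame:
    def test_fixture(self, test_data):
        assert isinstance(test_data, pd.DataFrame)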

How can I improve code coverage of Python3

With unittest and Coverage.py,
# add_one.py
def add_one(num: int):
    num = num + 1
    return num

# test_one.py
from unittest import TestCase
from add_one import add_one

class TestAddOne(TestCase):
    def test_add_one(self):
        self.assertEqual(add_one(0), 1)
        self.assertNotEqual(add_one(0), 2)
and here is the coverage (the screenshot, omitted here, showed the test method's body as not covered):
How can I test the whole file?
Assuming that your test file is called test_one.py, run this command in the same directory:
coverage run -m unittest test_one.py && coverage report
Result should look similar to this:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Name          Stmts   Miss  Cover
---------------------------------
add_one.py        3      0   100%
test_one.py       6      0   100%
---------------------------------
TOTAL             9      0   100%
You never call the test_add_one method.
Note how the function definition is executed, but not the body. To run your test, add a __main__ check and a TestSuite/TextTestRunner (https://docs.python.org/3/library/unittest.html)
from unittest import TestCase, TestSuite, TextTestRunner
from add_one import add_one

class TestAddOne(TestCase):
    def test_add_one(self):
        self.assertEqual(add_one(0), 1)
        self.assertNotEqual(add_one(0), 2)

if __name__ == "__main__":
    suite = TestSuite()
    suite.addTest(TestAddOne("test_add_one"))
    TextTestRunner().run(suite)
The result of
coverage run <file.py>
coverage html
# OR
coverage report -m
is that all lines are covered.
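For what it's worth, a shorter equivalent is to end the test file with unittest.main(), which discovers and runs every test method in the module; a minimal sketch:
from unittest import TestCase, main
from add_one import add_one

class TestAddOne(TestCase):
    def test_add_one(self):
        self.assertEqual(add_one(0), 1)
        self.assertNotEqual(add_one(0), 2)

if __name__ == "__main__":
    main()  # discovers and runs all TestCase methods in this module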

How to mock.patch library function from within unit test

I have a module called learning that uses random.uniform(). I have a file called test_learning.py containing unit tests. When I run a unit test, I would like the code in learning to see the patched version of random.uniform(). How can I do this? Here is what I have currently.
import random
import unittest
import unittest.mock as mock

import learning

class TestLearning(unittest.TestCase):
    def test_get_random_belief_bit(self):
        with mock.patch('learning.random.uniform', mock_uniform):
            bit = learning.get_random_belief_bit(0.4)
            self.assertEqual(bit, 0)
But the test (sometimes) fails because learning.get_random_belief_bit() seems to be using the real random.uniform().
Unit test solution:
learning.py:
import random

def get_random_belief_bit(f):
    # random.uniform requires two bounds
    return random.uniform(0, 1)
test_learning.py:
import random
import unittest
import unittest.mock as mock

import learning

class TestLearning(unittest.TestCase):
    def test_get_random_belief_bit(self):
        with mock.patch('random.uniform', mock.Mock()) as mock_uniform:
            mock_uniform.return_value = 0
            bit = learning.get_random_belief_bit(0.4)
            self.assertEqual(bit, 0)
            mock_uniform.assert_called_once()

if __name__ == '__main__':
    unittest.main()
unit test result with coverage report:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Name                                          Stmts   Miss  Cover   Missing
---------------------------------------------------------------------------
src/stackoverflow/57874971/learning.py            3      0   100%
src/stackoverflow/57874971/test_learning.py      13      0   100%
---------------------------------------------------------------------------
TOTAL                                            16      0   100%
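For reference, the same test can be written with the decorator form of mock.patch; a sketch using the same module names as above:
import unittest
import unittest.mock as mock

import learning

class TestLearningDecorator(unittest.TestCase):
    # patch() as a decorator injects the mock as an extra argument.
    @mock.patch('random.uniform', return_value=0)
    def test_get_random_belief_bit(self, mock_uniform):
        bit = learning.get_random_belief_bit(0.4)
        self.assertEqual(bit, 0)
        mock_uniform.assert_called_once()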

Why are my App tests not being recognized by Django tests? [duplicate]

This question already has an answer here:
How to split Django app tests across several files
(1 answer)
Closed 9 years ago.
Background:
I have the following django project setup:
TopLevel/
    App1/
        models.py
        forms.py
        views.py
        __init__.py
        Tests/
            __init__.py
            test_simple.py
Here is the code in test_simple.py:
from django.test import TestCase

class SimpleTest(TestCase):
    def test_basic_addition(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.assertEqual(1 + 1, 2)
Now, when I run:
> python manage.py test app1
I get the following output:
>Creating test database for alias 'default'...
>
>----------------------------------------------------------------------
>Ran 0 tests in 0.000s
>
>OK
>Destroying test database for alias 'default'...
But, if I instead use the following project structure:
TopLevel/
    App1/
        models.py
        forms.py
        views.py
        __init__.py
        tests.py
Where tests.py has the following code:
from django.test import TestCase

class SimpleTest(TestCase):
    def test_basic_addition(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.assertEqual(1 + 1, 2)
Now, when I run:
> python manage.py test app1
I get:
Creating test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 0.002s

OK
Destroying test database for alias 'default'...
Question:
Why does Django not recognize my Tests directory, and why won't any tests inside Tests/ be picked up by Django's unittest machinery?
One option that lets you sleep well and not even think about test discovery is to use nose. It has a lot of features; one of them is automatic test discovery.
There is a package called django_nose that will help you integrate your Django project with nose:
Features
All the goodness of nose in your Django tests, like...
...
Obviating the need to import all your tests into tests/__init__.py. This not only saves busy-work but also eliminates the possibility of accidentally shadowing test classes.
...
Hope that helps.
You will need to rename your Tests to tests and import every test into tests/__init__.py, up until Django 1.5 AFAIK. There is also a test runner that works the way unittest2 discovery works; this functionality has been integrated into Django 1.6.
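A sketch of that pre-1.6 workaround, using the question's layout with Tests renamed to lowercase tests (app label app1 as in the command above):
# app1/tests/__init__.py
# Importing the test cases here lets the pre-1.6 Django runner find
# them when running `python manage.py test app1`.
from app1.tests.test_simple import SimpleTest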
Have a look at: running tests in Django
Test discovery is based on the unittest module’s built-in test discovery. By default, this will discover tests in any file named “test*.py” under the current working directory.
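So on Django 1.6+ the original layout works as long as the file names match that pattern; a sketch:
app1/
    tests/
        __init__.py        # can be empty on Django >= 1.6
        test_simple.py     # matches the default "test*.py" pattern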
