I have a whole bunch of documentation for my Python package written using standard Sphinx .rst files. I also have tests for my package, among which I'd like to include a test for whether the documentation will compile properly into the expected output. Basically, I want to catch cases when I've used a link to nowhere, or have a poorly-formed header etc.
Now I know that I can always write a test that calls make html and checks the exit code, but this feels really dirty, so I'm thinking that there must be a better way. Does anyone know what it is?
You could create a unit test for your documentation in the same way as you create tests for your code. To turn warnings into errors, pass warningiserror=True when constructing the Sphinx application:
import unittest

from sphinx.application import Sphinx


class DocTest(unittest.TestCase):
    source_dir = 'docs/source'
    config_dir = 'docs/source'
    output_dir = 'docs/build'
    doctree_dir = 'docs/build/doctrees'
    all_files = True

    def test_html_documentation(self):
        app = Sphinx(self.source_dir,
                     self.config_dir,
                     self.output_dir,
                     self.doctree_dir,
                     buildername='html',
                     warningiserror=True,
                     )
        app.build(force_all=self.all_files)
        # TODO: additional checks here if needed

    def test_text_documentation(self):
        # The same, but with a different buildername
        app = Sphinx(self.source_dir,
                     self.config_dir,
                     self.output_dir,
                     self.doctree_dir,
                     buildername='text',
                     warningiserror=True,
                     )
        app.build(force_all=self.all_files)
        # TODO: additional checks if needed

    def tearDown(self):
        # TODO: clean up the output directory
        pass
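Since catching links to nowhere was part of the goal, the same pattern also works with Sphinx's linkcheck builder, which visits the targets of external links. A sketch of an extra test method for the DocTest class above (note it needs network access to run):

    def test_link_check(self):
        # linkcheck verifies that the targets of external links respond,
        # so this test requires network access
        app = Sphinx(self.source_dir,
                     self.config_dir,
                     self.output_dir,
                     self.doctree_dir,
                     buildername='linkcheck',
                     warningiserror=True,
                     )
        app.build(force_all=self.all_files)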
I am trying to upgrade a 10 year old event listener that I didn't write from Python 2.7 to Python 3.7. The basic issue I'm running into is the way the original script was importing its plugins. The idea behind the original script was that any Python file put into a "plugins" folder, with a "registerCallbacks" function inside it, would auto-load itself into the event listener and run. It's been working great for lots of studios for years, but Python 3.7 is not liking it at all.
The folder structure for the original code is as follows:
EventListenerPackage
    src
        event_listener.py
    plugins
        plugin_1.py
        plugin_2.py
From this, you can see that both the event listener and the plugins are held in folders that are parallel to each other, not nested.
The original code read like this:
# Python 2.7 implementation
import imp
import traceback


class Plugin(object):
    def __init__(self, path):
        self._path = 'c:/full/path/to/EventListenerPackage/plugins/plugin_1.py'
        self._pluginName = 'plugin_1'

    def load(self):
        try:
            plugin = imp.load_source(self._pluginName, self._path)
        except:
            self._active = False
            self.logger.error('Could not load the plugin at %s.\n\n%s',
                              self._path, traceback.format_exc())
            return
        regFunc = getattr(plugin, 'registerCallbacks', None)
Due to the nature of the changes (as I understand them) in the way that Python 3 imports modules, none of the other message boards seem to be getting me to the answer.
I have tried several different approaches, the best so far being:
How to import a module given the full path?
I've tried several different methods, including adding the full path to sys.path, but I always get "ModuleNotFoundError".
Here is roughly where I'm at now.
import importlib.util
import importlib.abc
import importlib
import traceback


class Plugin(object):
    def __init__(self, path):
        self._path = 'c:/full/path/to/EventListenerPackage/plugins/plugin_1.py'
        self._pluginName = 'plugin_1'

    def load(self):
        try:
            spec = importlib.util.spec_from_file_location('plugins.%s' % self._pluginName, self._path)
            plugin = importlib.util.module_from_spec(spec)
            # OR I HAVE ALSO TRIED
            plugin = importlib.import_module(self._path)
        except:
            self._active = False
            self.logger.error('Could not load the plugin at %s.\n\n%s',
                              self._path, traceback.format_exc())
            return
        regFunc = getattr(plugin, 'registerCallbacks', None)
Does anyone have any insights into how I can actually import these modules with the given folder structure?
Thanks in advance.
You're treating plugins like it's a package. It's not. It's just a folder you happen to have your plugin source code in.
You need to stop putting plugins. in front of the module name argument in spec_from_file_location:
spec = importlib.util.spec_from_file_location(self._pluginName, self._path)
Aside from that, you're also missing the part that actually executes the module's code:
spec.loader.exec_module(plugin)
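Putting both fixes together, a minimal sketch of the corrected load() (assuming self._path, self._pluginName and self.logger are set up as in the question):

def load(self):
    try:
        spec = importlib.util.spec_from_file_location(self._pluginName, self._path)
        plugin = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(plugin)  # actually execute the module's code
    except Exception:
        self._active = False
        self.logger.error('Could not load the plugin at %s.\n\n%s',
                          self._path, traceback.format_exc())
        return
    regFunc = getattr(plugin, 'registerCallbacks', None)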
Depending on how you want your plugin system to interact with regular modules, you could alternatively just stick the plugin directory onto the import path:
sys.path.append(plugin_directory)
and then import your plugins with import or importlib.import_module. Probably importlib.import_module, since it sounds like the plugin loader won't know plugin names in advance:
plugin = importlib.import_module(plugin_name)
If you do this, plugins will be treated as ordinary modules, with consequences like not being able to safely pick a plugin name that collides with an installed module.
As an entirely separate issue, it's pretty weird that your Plugin class completely ignores its path argument.
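For illustration only, a small sketch of what honoring the argument could look like (deriving the module name from the file name is an assumption here, not something from the original code):

import os

class Plugin(object):
    def __init__(self, path):
        self._path = path  # use the argument instead of a hard-coded path
        self._pluginName = os.path.splitext(os.path.basename(path))[0]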
This is probably a relatively simple question, but I struggle to find an answer otherwise.
I am working on a small Python project, and I would like to do this in a test-driven way. File operations in a defined folder in the user's home directory are quite essential to the programme, but I would like these to happen in a separate temporary folder for the tests.
Is there a way to set a variable in my app that is different if the app realises it's run as part of the testing environment? My current workarounds (def some_function(self, test=False) or including lots of @patch decorators) are not very elegant...
I'm thinking of something along the lines of:
def return_home_folder():
    if testing:
        home = os.getcwd() + '/testrun'
    else:
        home = os.path.expanduser('~')
    returnvalue = home + '/appname/'
    return returnvalue
IMHO it is not a good idea to have a function behave differently in test vs. production. The whole point of tests is to foretell how the program will behave in production, and changing the behaviour kinda defeats that.
(Unit Testing is different from a "dry run", but that's a separate issue.)
I'd go for something like this:
def get_app_folder():
    return os.path.join(os.path.expanduser("~"), "appname")


def test_get_app_folder():
    assert get_app_folder().startswith(os.path.expanduser("~"))
    assert get_app_folder().endswith("appname")
The unit tests themselves aren't overly instructive, but they show how you can work around the need for a "testing mode" altogether.
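If the tests still need to keep file operations away from the real home directory, one hedged alternative is to redirect them in the tests themselves rather than branching in the app code, e.g. with pytest's built-in tmp_path and monkeypatch fixtures (get_app_folder is the function defined above):

import os

def test_app_folder_can_be_redirected(tmp_path, monkeypatch):
    # point expanduser at a throwaway directory for the duration of the test
    monkeypatch.setattr('os.path.expanduser', lambda path: str(tmp_path))
    assert get_app_folder() == os.path.join(str(tmp_path), "appname")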
You could define your environment in an environment variable:
$ export MY_APP_ENVIRONMENT=test
Read it in a settings.py module:
import os

ENVIRONMENT = os.environ['MY_APP_ENVIRONMENT']

_base_dir_map = {
    'test': os.path.join(os.getcwd(), 'testrun'),
    'prod': os.path.expanduser('~'),
}

HOME_DIR = os.path.join(_base_dir_map[ENVIRONMENT], 'appname')
Then, everywhere (tests and app code), you would use settings.HOME_DIR:
import os
from my_app import settings
file_path = os.path.join(settings.HOME_DIR, 'filename')
...
Hope this works for you or gets you on a track to something that does.
In pytest, my testing script compares the calculated results with baseline results which are loaded via
SCRIPTLOC = os.path.dirname(__file__)
TESTBASELINE = os.path.join(SCRIPTLOC, 'baseline', 'baseline.csv')
baseline = pandas.read_csv(TESTBASELINE, index_col=0)
Is there a non-boilerplate way to tell pytest to start looking from the root directory of the script rather than get the absolute location through SCRIPTLOC?
If you are simply looking for the pytest equivalent of using __file__, you can add the request fixture to your test and use request.fspath
From the docs:
class FixtureRequest
    ...
    fspath
        the file system path of the test module which collected this test.
So an example might look like:
import os

def test_script_loc(request):
    baseline = os.path.join(request.fspath.dirname, 'baseline', 'baseline.csv')
    print(baseline)
If you're wanting to avoid boilerplate though, you're not going to gain much from doing this (assuming I understand what you mean by 'non-boilerplate')
Personally, I think using the fixture is more explicit (within pytest idioms), but I prefer to wrap the request manipulation in another fixture so I know that I'm specifically grabbing sample test data just by looking at the method signature of a test.
Here's a snippet I use (modified to match your question, I use a subdirectory hierarchy):
# in conftest.py
import pytest


@pytest.fixture(scope="module")
def script_loc(request):
    '''Return the directory of the currently running test script'''
    # uses .join instead of .dirname so we get a LocalPath object instead of
    # a string. LocalPath.join calls normpath for us when joining the path
    return request.fspath.join('..')
And sample usage
def test_script_loc(script_loc):
    baseline = script_loc.join('baseline/baseline.csv')
    print(baseline)
Update: pathlib
Given that I've long since stopped supporting legacy Python and instead use pathlib, I no longer return LocalPath objects nor depend on their API.
The pytest team is also planning to eventually deprecate py.path and port their internals to the standard library pathlib. This started as early as pytest 3.9.0 with the introduction of tmp_path, though the actual removal of LocalPath attributes may not happen for some time.
Though the pytest team will likely add an alternative attribute (e.g. request.fs_path) that returns a Path object, it's easy enough to convert the LocalPath to a Path ourselves for now.
Here's a variant of the above example using a Path object, tweak as it suits you:
# in conftest.py
import pytest
from pathlib import Path


@pytest.fixture(scope="module")
def script_loc(request):
    '''Return the directory of the currently running test script'''
    return Path(request.fspath).parent
And sample usage
def test_script_loc(script_loc):
    baseline = script_loc.joinpath('baseline/baseline.csv')
    print(baseline)
I am writing functional tests using pytest for a software that can run locally and in the cloud. I want to create 2 modules, each with the same module/fixture names, and have pytest load one or the other depending if I'm running tests locally or in the cloud:
/fixtures
/fixtures/__init__.py
/fixtures/local_hybrids
/fixtures/local_hybrids/__init__.py
/fixtures/local_hybrids/foo.py
/fixtures/cloud_hybrids
/fixtures/cloud_hybrids/__init__.py
/fixtures/cloud_hybrids/foo.py
/test_hybrids/test_hybrids.py
foo.py (both of them):
import pytest


@pytest.fixture()
def my_fixture():
    return True
/fixtures/__init__.py:
if True:
    import local_hybrids as hybrids
else:
    import cloud_hybrids as hybrids
/test_hybrids/test_hybrids.py:
from fixtures.hybrids.foo import my_fixture


def test_hybrid(my_fixture):
    assert my_fixture
The last code block doesn't work, of course, because import fixtures.hybrids looks at the file system instead of the "fake" namespace set up in __init__.py. By contrast, from fixtures import hybrids works, but then you cannot use the fixtures, as their names would involve dot notation.
I realize that I could play with pytest_generate_tests to alter the fixtures dynamically (maybe?), but I'd really hate managing each fixture manually from within that function... I was hoping the dynamic import (if x, import this, else import that) was standard Python; unfortunately it clashes with the fixtures mechanism:
import fixtures


def test(fixtures.hybrids.my_fixture):  # of course it doesn't work :)
    ...
I could also import each fixture function one after the other in __init__.py; more legwork, but still a viable option to fool pytest and get fixture names without dots.
Show me the black magic. :) Can it be done?
I think in your case it's better to define a fixture - environment, or some other nice name.
This fixture can be just a getter for os.environ['KEY'], or you can add a custom command line argument like here
then use it like here
and the final use is here.
What I'm trying to say is that you need to switch your thinking to dependency injection: everything should be a fixture. In your case (and in my plugin as well), the runtime environment should be a fixture, which is checked in all other fixtures that depend on the environment.
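A minimal sketch of that idea, assuming the environment is selected through an environment variable (the names TEST_ENVIRONMENT, make_local_resource and make_cloud_resource are illustrative, not from the question):

import os

import pytest


@pytest.fixture(scope='session')
def environment():
    # 'local' unless the variable says otherwise
    return os.environ.get('TEST_ENVIRONMENT', 'local')


@pytest.fixture()
def my_fixture(environment):
    # every environment-sensitive fixture depends on the environment fixture
    if environment == 'cloud':
        return make_cloud_resource()  # hypothetical helper
    return make_local_resource()      # hypothetical helper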
You might be missing something here: If you want to re-use those fixtures you need to say it explicitly:
import pytest

from fixtures.hybrids.foo import my_fixture


@pytest.mark.usefixtures('my_fixture')
def test_hybrid(my_fixture):
    assert my_fixture
In that case you could tweak pytest as follows:
import pytest

from local_hybrids import local_hybrids_fixture
from cloud_hybrids import cloud_hybrids_fixture

fixtures_to_test = {
    "local": None,
    "cloud": None
}


@pytest.mark.usefixtures("local_hybrids_fixture")
def test_add_local_fixture(local_hybrids_fixture):
    fixtures_to_test["local"] = local_hybrids_fixture


@pytest.mark.usefixtures("cloud_hybrids_fixture")
def test_add_cloud_fixture(cloud_hybrids_fixture):
    fixtures_to_test["cloud"] = cloud_hybrids_fixture


def test_on_fixtures():
    if cloud_enabled:
        fixture = fixtures_to_test["cloud"]
    else:
        fixture = fixtures_to_test["local"]
    ...
If there are better solutions around, I am also interested ;)
I don't really think there is a "good way" of doing that in Python, but it is still possible with a small amount of hacking. You can add the subfolder with the fixtures you would like to use to sys.path and import the fixtures directly. In its dirty form it looks like this:
For your fixtures/__init__.py:
if True:
    import local as hybrids
else:
    import cloud as hybrids


def update_path(module):
    from sys import path
    from os.path import join, pardir, abspath

    mod_dir = abspath(join(module.__file__, pardir))
    path.insert(0, mod_dir)


update_path(hybrids)
And in the client code (test_hybrids/test_hybrids.py):
import fixtures
from foo import spam
spam()
In other cases you can use much more complex actions to perform a fake move of all the modules/packages/functions etc. from your cloud/local folder directly into the fixtures' __init__.py. Still, I don't think it is worth the effort.
One more thing - black magic is not the best thing to use; I would recommend explicit dotted imports ("from Y import X") - that is a much more stable solution.
Use the pytest plugins feature and put your fixtures in separate modules. Then at runtime select which plug-in you’ll be drawing from via a command line argument or an environment variable. It needs to be something global because you need to place different pytest_plugins list assignments based on the global value.
Take a look at the section Conditional Plugins from this repo https://github.com/jxramos/pytest_behavior/tree/main/conditional_plugins
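A hedged sketch of what that could look like in a top-level conftest.py, assuming an environment variable named MY_TEST_ENV and that each foo.py is importable under the package paths from the question (both names are illustrative):

# conftest.py
import os

# pytest_plugins must be assigned in the top-level conftest.py
if os.environ.get('MY_TEST_ENV', 'local') == 'cloud':
    pytest_plugins = ['fixtures.cloud_hybrids.foo']
else:
    pytest_plugins = ['fixtures.local_hybrids.foo']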
I was wondering if it were possible, and preferably not too difficult, to use Django DiscoverRunner to delete my media directory between every test, including once at the very beginning and once at the very end. I was particularly interested in the new attributes "test_suite" and "test_runner" that were introduced in Django 1.7 and was wondering if they would make this task easier.
I was also wondering how I can make the test-specific MEDIA_ROOT a temporary file. Currently I have a regular MEDIA_ROOT called "media" and a testing MEDIA_ROOT called "media_test", and I use rmtree in the setUp and tearDown of every test class that involves the media directory. The way I specify which MEDIA_ROOT to use is in my test.py settings file; currently I just have:
MEDIA_ROOT = normpath(join(DJANGO_ROOT, 'media_test'))
Is there a way I can set MEDIA_ROOT to a temporary directory named "media" instead?
This question is a bit old; my answer is from Django 2.0 and Python 3.6.6 or later. Although I think the technique works on older versions too, YMMV.
I think this is a much more important question than it gets credit for! When you write good tests, it's only a matter of time before you need to whip up or generate test files. Either way, you're in danger of polluting the file system of your server or developer machine. Neither is desirable!
I think the write-up on this page is a best practice. I'll copy/paste the code snippet below if you don't care about the reasoning (more notes afterwards):
----
First, let’s write a basic, really basic, model
from django.db import models


class Picture(models.Model):
    picture = models.ImageField()
Then, let’s write a really, really basic, test.
import tempfile

from PIL import Image

from django.test import TestCase
from django.test import override_settings

from .models import Picture


def get_temporary_image(temp_file):
    size = (200, 200)
    color = (255, 0, 0, 0)
    image = Image.new("RGBA", size, color)
    image.save(temp_file, 'jpeg')
    return temp_file


class PictureDummyTest(TestCase):
    @override_settings(MEDIA_ROOT=tempfile.TemporaryDirectory(prefix='mediatest').name)
    def test_dummy_test(self):
        temp_file = tempfile.NamedTemporaryFile()
        test_image = get_temporary_image(temp_file)
        # test_image.seek(0)
        picture = Picture.objects.create(picture=test_image.name)
        print("It Worked!, ", picture.picture)
        self.assertEqual(len(Picture.objects.all()), 1)
----
I made one important change to the code snippet: TemporaryDirectory().name. The original snippet used gettempdir(). The TemporaryDirectory function creates a new folder with a system-generated name every time it's called. That folder will be removed by the OS - but we don't know when! This way, we get a new folder each run, so there is no chance of name conflicts. Note I had to add the .name element to get the name of the generated folder, since MEDIA_ROOT has to be a string. Finally, I added prefix='mediatest' so all the generated folders are easy to identify in case I want to clean them up in a script.
Also potentially useful to you is how the settings override can easily be applied to a whole test class, not just one test function. See this page for details.
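For instance, a sketch of the same override applied at class level (tempfile.mkdtemp returns a concrete path string up front, which is what makes it usable as a decorator argument; the class name is illustrative):

import tempfile

from django.test import TestCase, override_settings


@override_settings(MEDIA_ROOT=tempfile.mkdtemp(prefix='mediatest'))
class PictureClassLevelTest(TestCase):
    # every test method in this class now uses the temporary MEDIA_ROOT
    ...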
Also note that in the comments after this article, some people show an even easier way to get a temp file name without worrying about media settings, using NamedTemporaryFile (only valid for tests that don't use media settings!).
The answer by Richard Cooke works but leaves the temporary directories lingering in the file system, at least on Python 3.7 and Django 2.2. This can be avoided by using a combination of setUpClass, tearDownClass and overriding the settings in the test methods. For example:
import tempfile

from django.test import TestCase


class ExampleTestCase(TestCase):
    temporary_dir = None

    @classmethod
    def setUpClass(cls):
        cls.temporary_dir = tempfile.TemporaryDirectory()
        super(ExampleTestCase, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.temporary_dir = None
        super(ExampleTestCase, cls).tearDownClass()

    def test_example(self):
        with self.settings(MEDIA_ROOT=self.temporary_dir.name):
            # perform a test
            pass
This way the temporary files are removed right away, and you don't need to worry about the name of the temporary directory either. (Of course, if you want, you can still use the prefix argument when calling tempfile.TemporaryDirectory.)
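On Python 3.8+ the cleanup can also be registered explicitly with addClassCleanup, so the teardown does not rely on dropping the reference; a sketch under that assumption:

import tempfile

from django.test import TestCase


class ExampleTestCase(TestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.temporary_dir = tempfile.TemporaryDirectory()
        # runs when the class is torn down (Python 3.8+)
        cls.addClassCleanup(cls.temporary_dir.cleanup)

    def test_example(self):
        with self.settings(MEDIA_ROOT=self.temporary_dir.name):
            pass  # perform a test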
One solution I have found that works is to simply delete it in setUp / tearDown. I would prefer to find some way to make it automatically apply to all tests instead of having to put the logic in every test file that involves media, but I have not figured out how to do that yet.
The code I use is:
from shutil import rmtree

from django.conf import settings
from django.test import TestCase


class MyTests(TestCase):
    def setUp(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)

    def tearDown(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)
The reason I do it in both setUp and tearDown is that if I only have it in setUp, I might end up with a lingering media_test directory. Even though it won't be checked in to GitHub by accident (it's in the .gitignore), it still takes up unnecessary space in my project explorer, and I just prefer not having it sit there. If I only have it in tearDown, then I risk causing problems if I quit out of the tests part way through and a later run tries a test that involves media while the media from the terminated run still lingers.
Something like this?
TESTING_MODE = True
...
MEDIA_ROOT = os.path.join(DJANGO_ROOT, 'media_test' if TESTING_MODE else 'media')