Running tox 4 programmatically with a custom plugin - Python

I want to run tox 4.4 programmatically with a custom plugin. It seemed simple enough:
from tox.plugin.manager import MANAGER
from tox.run import main
import fooo
MANAGER.manager.register(fooo)
main([])
The problem is that the manager unregisters all plugins when tox starts. From tox.plugin.manager:
def load_plugins(self, path: Path) -> None:
    for _plugin in self.manager.get_plugins():  # make sure we start with a clean state, repeated in memory run
        self.manager.unregister(_plugin)
I can't find any official, documented way to do this (and I really do not want to monkey-patch anything).
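One way to sidestep the clean-up may be to let tox discover the plugin itself instead of registering it by hand: tox 4 plugins are ordinarily exposed through the "tox" entry-point group, which the loader reads on every run. A minimal sketch, assuming the fooo package from the question and the tox_add_option hook from the tox 4 plugin API (worth verifying against your tox version):

# fooo/__init__.py -- sketch of a tox 4 plugin module
from tox.plugin import impl


@impl
def tox_add_option(parser):
    # tox calls this hook on every loaded plugin while building its CLI parser
    parser.add_argument("--fooo", action="store_true", help="flag added by the fooo plugin")


# Expose the plugin in the package metadata so tox finds it on its own,
# e.g. in fooo's pyproject.toml:
#
#   [project.entry-points.tox]
#   fooo = "fooo"
#
# Once the package is installed into the environment that runs tox,
# tox.run.main([]) should pick the plugin up without touching MANAGER.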

Related

Test if code is executed from within a py.test session

I'd like to connect to a different database if my code is running under py.test. Is there a function to call or an environment variable that I can test that will tell me if I'm running under a py.test session? What's the best way to handle this?
A simpler solution I came to:
import sys
if "pytest" in sys.modules:
    ...
The pytest runner will always load the pytest module, making it available in sys.modules.
Of course, this solution only works if the code you're trying to test does not use pytest itself.
There's also another way, documented in the manual:
https://docs.pytest.org/en/latest/example/simple.html#pytest-current-test-environment-variable
Pytest sets the environment variable PYTEST_CURRENT_TEST while tests are running, so checking for that variable reliably tells you whether the code is executing under pytest:
import os
if "PYTEST_CURRENT_TEST" in os.environ:
    # We are running under pytest, act accordingly...
    pass
Note: this method only works while an actual test is being run; it will not detect pytest when modules are merely imported during collection.
A solution came from RTFM, although not in an obvious place. The manual also had an error in code, corrected below.
Detect if running from within a pytest run
Usually it is a bad idea to make application code behave differently
if called from a test. But if you absolutely must find out if your
application code is running from a test you can do something like
this:
# content of conftest.py
def pytest_configure(config):
    import sys
    sys._called_from_test = True


def pytest_unconfigure(config):
    import sys  # This was missing from the manual
    del sys._called_from_test
and then check for the sys._called_from_test flag:
if hasattr(sys, '_called_from_test'):
    # called from within a test run
    pass
else:
    # called "normally"
    pass
accordingly in your application. It's also a good idea to use your own
application module rather than sys for handling the flag.
Working with pytest==4.3.1 the methods above failed for me, so I just went old school and checked with:
import os
import sys

script_name = os.path.basename(sys.argv[0])
if script_name in ['pytest', 'py.test']:
    print('Running with pytest!')
While the hack explained in the other answer (http://pytest.org/latest/example/simple.html#detect-if-running-from-within-a-pytest-run) does indeed work, you could probably design the code in such a way that you would not need to do this.
If you design the code to take the database to connect to as an argument, via a connection object or something else, then you can simply inject a different argument when you're running the tests than when the application drives it. Your code ends up with less global state and is more modular and reusable. So to me it sounds like an example where testing drives you to design the code better; a small sketch of the idea follows.
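Here is a rough illustration of that injection idea (create_app and the SQLite databases are made up for this sketch, not taken from the question):

# sketch: pass the database connection in instead of detecting pytest inside the code
import sqlite3


def create_app(db_connection):
    # The application code only ever sees the connection it was handed.
    def handler():
        return db_connection.execute("SELECT 1").fetchone()
    return handler


# production wiring:
#     app = create_app(sqlite3.connect("production.db"))

# test wiring (e.g. inside a pytest fixture) uses an in-memory database instead:
def test_handler():
    app = create_app(sqlite3.connect(":memory:"))
    assert app() == (1,)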
This could be done by setting an environment variable inside the testing code. For example, given a project
conftest.py
mypkg/
    __init__.py
    app.py
tests/
    test_app.py
In test_app.py you can add
import os
os.environ['PYTEST_RUNNING'] = 'true'
And then you can check inside app.py:
import os
if os.environ.get('PYTEST_RUNNING', '') == 'true':
    print('pytest is running')

Python: What's the right way to make modules visible to a TestRunner?

After watching a couple of presentations about Django testing, I want to write my own TestRunner in order to skip Django's tests and create a better package structure for my tests.
The problem is that we've changed the project structure and the test runner can't find the right path to do test discovery. This is what my project looks like:
project/
    src/
        project_name/
            apps/
            test/  # Not a good name, I know, will change it
                some_app/
                    test_models.py
        manage.py
        development.db
Now, in order to test test_models.py I want to do this:
$ cd project/src/
$ python manage.py test some_app.test_models
The problem is that the test runner can't find that package (some_app) and module (test_models.py). That changes if I hardcode the name in the test runner, but I don't like doing that. Here's what I do to make it work:
test_labels = ["%s.%s" % ("project_name.test", l)
               for l in test_labels
               if not l.startswith("project_name.test")]
So, if you do
$ python manage.py test some_app.test_models
It will be rewritten to:
$ python manage.py test project_name.test.some_app.test_models
And that works fine.
I tried doing sys.path.append("(...)/project_name/test") but that doesn't work either.
This is the code of my TestRunner:
# Imports assumed for this snippet (Django 1.x-era / unittest2 APIs):
from importlib import import_module
from unittest2 import defaultTestLoader
from django.conf import settings
from django.test import TestCase
from django.test.simple import DjangoTestSuiteRunner, reorder_suite


class DiscoveryDjangoTestSuiteRunner(DjangoTestSuiteRunner):
    """A test suite runner that uses unittest2 test discovery.

    It's better than the default Django test runner because it
    doesn't run Django's own tests and lets you put your tests in
    different packages, modules and classes.

    To test everything in there:
        $ ./manage.py test

    To test a single package/module:
        $ ./manage.py test package
        $ ./manage.py test package.module

    To test a single class:
        $ ./manage.py test package.module.ClassName
    """

    def build_suite(self, test_labels, extra_tests=None, **kwargs):
        suite = None
        discovery_root = settings.TEST_DISCOVERY_ROOT
        if test_labels:
            # This is where I append the path
            suite = defaultTestLoader.loadTestsFromNames(test_labels)
            # if a single named module has no tests, do discovery within it
            if not suite.countTestCases() and len(test_labels) == 1:
                suite = None
                discovery_root = import_module(test_labels[0]).__path__[0]
        if suite is None:
            suite = defaultTestLoader.discover(
                discovery_root,
                top_level_dir=settings.BASE_PATH,
            )
        if extra_tests:
            for test in extra_tests:
                suite.addTest(test)
        return reorder_suite(suite, (TestCase,))
Your Python import hierarchy is rooted at project/src. Thus, the correct Python import path for your test_models module is project_name.test.some_app.test_models, so that's what I would expect to pass in as a test label.
But you don't like typing the project_name.test prefix every time you want to run a specific test module, since all your tests will be located there. That's fine: you're choosing to introduce some implicit non-obvious behavior in exchange for some convenience. You definitely should not add anything to sys.path in order to achieve this: the key to Python import sanity is having your import hierarchy for a given codebase rooted in one and exactly one place; overlapping sys.path entries will cause problems like doubled imports of the same module under different names.
Really all you want is a UI convenience, and it looks to me like the test-label-munging code you show is the obvious way to implement that convenience. You don't like having the project_name.test prefix hardcoded, but it's going to have to be hardcoded somewhere: there's no way the test runner is going to magically figure out that you want to prepend test labels with project_name.test. If you want your TestRunner to be more generic, you can pull it out into a setting like BASE_TEST_MODULE or some such and prepend the value of that setting to each test label.
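A minimal sketch of that settings-based variant (BASE_TEST_MODULE is an illustrative name, not an existing Django setting):

# sketch: prefix incoming test labels with a configurable base module
from django.conf import settings


def expand_test_labels(test_labels):
    # Prepend settings.BASE_TEST_MODULE (e.g. "project_name.test") to any
    # label that does not already carry the prefix.
    base = getattr(settings, "BASE_TEST_MODULE", "")
    if not base:
        return list(test_labels)
    return [l if l.startswith(base) else "%s.%s" % (base, l) for l in test_labels]


# inside build_suite():
#     test_labels = expand_test_labels(test_labels)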
Before you continue investing more time into your custom TestRunner, I would definitely recommend that you take a look at django-nose.
The custom test runner provided by django-nose implements nose's test runner which is extremely flexible and provides a lot of options for running your tests. It seamlessly overrides the default test management command and allows you to configure default test options in your project's settings module.
I'm really recommending it for several reasons:
The options for the test command are fully documented (take a look at the output)
nose provides a lot of approaches for test discovery
Chances are your colleagues are already seasoned nose users
You don't have to write the TestRunner class yourself
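If you go the django-nose route, the wiring is small; a sketch of the usual setup (names as I recall them from the django-nose documentation, worth double-checking for your version):

# settings.py -- typical django-nose wiring
INSTALLED_APPS = (
    # ... your apps ...
    'django_nose',
)

# Replace the default test runner with the nose-based one
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'

# Optional: default arguments passed to nose on every test run
NOSE_ARGS = ['--with-coverage', '--cover-package=project_name']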

Yapsy: a simple hack to get rid of the plugin info file

I'd like to use a plugin system within my code. I've looked around for simple (yet powerful) python modules, and found Yapsy (among some others).
It is pretty much what I was looking for, but the way Yapsy discovers plugins is not very flexible and requires a plugin info file to be present. I'd like to get rid of it without having to fork the code (if I start relying on Yapsy, I want to be sure I'll get all its updates without having to re-fork it each time).
I came up with this quick and dirty solution, which works fine but does not improve the flexibility of the discovery process:
#!/usr/bin/env python
import os
import logging
from cStringIO import StringIO

from yapsy.PluginManager import PluginManager
from yapsy.IPlugin import IPlugin
from yapsy.PluginInfo import PluginInfo


class MyPluginManager(PluginManager):
    """
    My attempt to get rid of the plugin info file...
    """

    def __init__(self,
                 categories_filter={"Default": IPlugin},
                 directories_list=None,
                 plugin_info_ext="plugin.py"):
        """
        Initialize the mapping of the categories and set the list of
        directories where plugins may be. This can also be set by
        direct calls to the methods:

        - ``setCategoriesFilter`` for ``categories_filter``
        - ``setPluginPlaces`` for ``directories_list``
        - ``setPluginInfoExtension`` for ``plugin_info_ext``

        You may look at these functions' documentation for the meaning
        of each corresponding argument.
        """
        self.setPluginInfoClass(PluginInfo)
        self.setCategoriesFilter(categories_filter)
        self.setPluginPlaces(directories_list)
        self.setPluginInfoExtension(plugin_info_ext)

    def _gatherCorePluginInfo(self, directory, filename):
        """
        Gather the core information (name, and module to be loaded)
        about a plugin described by its info file (found at
        'directory/filename').

        Return an instance of ``self.plugin_info_cls`` and the
        config_parser used to gather the core data *in a tuple*, if the
        required info could be localised, else return ``(None, None)``.

        .. note:: This is supposed to be used internally by subclasses
                  and decorators.
        """
        # now we can consider the file as a serious candidate
        candidate_infofile = os.path.join(directory, filename)
        print candidate_infofile
        # My hack: just create a StringIO file with basic plugin info
        _fname = filename.rstrip(".py")
        _file = StringIO()
        _file.write("""[Core]
Name = %s
Module = %s
""" % (_fname, _fname))
        _file.seek(0)
        # parse the information file to get info about the plugin
        name, moduleName, config_parser = self._getPluginNameAndModuleFromStream(_file, candidate_infofile)
        print name, moduleName, config_parser
        if (name, moduleName, config_parser) == (None, None, None):
            return (None, None)
        # start collecting essential info
        plugin_info = self._plugin_info_cls(name, os.path.join(directory, moduleName))
        return (plugin_info, config_parser)
This hack just assumes that the plugin has the extension ".plugin.py" (or ".plugin" for a directory, but I did not test that). Then I create a cStringIO file to fool Yapsy into thinking it found a plugin info file. (One can still provide additional information in the plugin by setting the proper variables: author, description, ...)
I'm wondering if there is a better way, or if people have already done this. This hack is clearly too rough to be really useful, and I'd like to have something more flexible: a plugin could be discovered by its plugin info file (as in the original code) or by a pattern for the plugin name (probably using re, allowing the use of a prefix, suffix, ...). As far as I can see, implementing these ideas would require a much more complex hack than what I've already done...
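As a rough sketch of that name-pattern idea (RegexPluginManager and the regex are invented here; it builds on the MyPluginManager class above and only changes how the plugin name is derived, so the file-collection step would still have to cooperate):

import os
import re
from cStringIO import StringIO


class RegexPluginManager(MyPluginManager):
    # accept e.g. "plugin_foo.py" and register the plugin under the name "foo"
    name_pattern = re.compile(r"^plugin_(?P<name>\w+)\.py$")

    def _gatherCorePluginInfo(self, directory, filename):
        match = self.name_pattern.match(filename)
        if match is None:
            return (None, None)
        # reuse the StringIO trick from above, deriving the name from the regex group
        _file = StringIO()
        _file.write("[Core]\nName = %s\nModule = %s\n"
                    % (match.group("name"), os.path.splitext(filename)[0]))
        _file.seek(0)
        name, moduleName, config_parser = self._getPluginNameAndModuleFromStream(
            _file, os.path.join(directory, filename))
        if (name, moduleName, config_parser) == (None, None, None):
            return (None, None)
        plugin_info = self._plugin_info_cls(name, os.path.join(directory, moduleName))
        return (plugin_info, config_parser)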
Ok, I've implemented a fork of the Yapsy plugin manager, and am actually in touch with the author of the package. As soon as the documentation and tests are done, I think this may be included in the next release of Yapsy.

If module 'example' contains both function 'run' and submodule 'run', can I count on 'from example import run' to always import the former?

If I have a Python module implemented as a directory (i.e. a package) that has both a top-level function run and a submodule run, can I count on from example import run to always import the function? Based on my tests that is the case at least with Python 2.6 and Jython 2.5 on Linux, but can I count on it generally? I tried to search for information about import priorities but couldn't find anything.
Background:
I have a pretty large package that people generally run as a tool from the command line but also sometimes use programmatically. I would like to have simple entry points for both usages and consider to implement them like this:
example/__init__.py:
def run(*args):
    print args  # real application code belongs here
example/run.py:
import sys
from example import run
run(*sys.argv[1:])
The first entry point allows users to access the module from Python like this:
from example import run
run(args)
The latter entry point allows users to execute the module from the command line using both of the approaches below:
python -m example.run args
python path/to/example/run.py args
Both of these work great and cover everything I need. Before taking this into real use, I would like to know whether this is a sound approach that I can expect to work with all Python implementations on all operating systems.
I think this should always work; the function definition will shadow the module.
However, this also strikes me as a dirty hack. The clean way to do this would be
# __init__.py
# re-export run.run as run
from .run import run
i.e., a minimal __init__.py, with all the running logic in run.py:
# run.py
import sys

def run(*args):
    print args  # real application code belongs here

if __name__ == "__main__":
    run(*sys.argv[1:])

Bundle a Python app as a single file to support add-ons or extensions?

There are several utilities — all with different procedures, limitations, and target operating systems — for getting a Python package and all of its dependencies and turning them into a single binary program that is easy to ship to customers:
http://wiki.python.org/moin/Freeze
http://www.pyinstaller.org/
http://www.py2exe.org/
http://svn.pythonmac.org/py2app/py2app/trunk/doc/index.html
My situation goes one step further: third-party developers will be wanting to write plug-ins, extensions, or add-ons for my application. It is, of course, a daunting question how users on platforms like Windows would most easily install plugins or addons in such a way that my app can easily discover that they have been installed. But beyond that basic question is another: how can a third-party developer bundle their extension with whatever libraries the extension itself needs (which might be binary modules, like lxml) in such a way that the plugin's dependencies become available for import at the same time that the plugin becomes available.
How can this be approached? Will my application need its own plug-in area on disk and its own plug-in registry to make this tractable? Or are there general mechanisms, that I could avoid writing myself, that would allow an app that is distributed as a single executable to look around and find plugins that are also installed as single files?
You should be able to have a plugins directory that your application scans at runtime (or later) to import the code in question. Here's an example that should work with regular .py or .pyc code, and that even works with plugins stored inside zip files (so users could just drop someplugin.zip in the 'plugins' directory and have it magically work):
import re, os, sys


class Plugin(object):
    """
    The base class from which all plugins are derived. It is used by the
    plugin loading functions to find all the installed plugins.
    """
    def __init__(self, foo):
        self.foo = foo
    # Any useful base plugin methods would go in here.


def get_plugins(plugin_dir):
    """Adds plugins to sys.path and returns them as a list"""
    registered_plugins = []
    # check to see if a plugins directory exists and add any found plugins
    # (even if they're zipped)
    if os.path.exists(plugin_dir):
        plugins = os.listdir(plugin_dir)
        pattern = ".py$"
        for plugin in plugins:
            plugin_path = os.path.join(plugin_dir, plugin)
            if os.path.splitext(plugin)[1] == ".zip":
                sys.path.append(plugin_path)
                (plugin, ext) = os.path.splitext(plugin)  # Get rid of the .zip extension
                registered_plugins.append(plugin)
            elif plugin != "__init__.py":
                if re.search(pattern, plugin):
                    (shortname, ext) = os.path.splitext(plugin)
                    registered_plugins.append(shortname)
            if os.path.isdir(plugin_path):
                plugins = os.listdir(plugin_path)
                for plugin in plugins:
                    if plugin != "__init__.py":
                        if re.search(pattern, plugin):
                            (shortname, ext) = os.path.splitext(plugin)
                            sys.path.append(plugin_path)
                            registered_plugins.append(shortname)
    return registered_plugins


def init_plugin_system(cfg):
    """
    Initializes the plugin system by appending all plugins into sys.path and
    then using load_plugins() to import them.

    cfg - A dictionary with two keys:
        plugin_path - path to the plugin directory (e.g. 'plugins')
        plugins - List of plugin names to import (e.g. ['foo', 'bar'])
    """
    if not cfg['plugin_path'] in sys.path:
        sys.path.insert(0, cfg['plugin_path'])
    load_plugins(cfg['plugins'])


def load_plugins(plugins):
    """
    Imports all plugins given a list.

    Note: Assumes they're all in sys.path.
    """
    for plugin in plugins:
        __import__(plugin, None, None, [''])
        if plugin not in Plugin.__subclasses__():
            # This takes care of importing zipped plugins:
            __import__(plugin, None, None, [plugin])
So let's say I have a plugin named "foo.py" in a directory called 'plugins' (which is in the base dir of my app) that will add a new capability to my application. Its contents might look like this:
from plugin_stuff import Plugin

class Foo(Plugin):
    """An example plugin."""
    def __init__(self, foo):
        Plugin.__init__(self, foo)
        # map a GUI menu and label to this plugin's callback
        self.menu_entry = {'Tools': {'Foo': self.bar}}

    def bar(self):
        return "foo plugin!"
I could initialize my plugins when I launch my app like so:
plugin_dir = "%s/plugins" % os.getcwd()
plugin_list = get_plugins(plugin_dir)
init_plugin_system({'plugin_path': plugin_dir, 'plugins': plugin_list})
plugins = find_plugins()  # presumably returns the loaded Plugin subclasses (not shown here)
plugin_menu_entries = []
for plugin in plugins:
    print "Enabling plugin: %s" % plugin.__name__
    plugin_menu_entries.append(plugin.menu_entry)
add_menu_entries(plugin_menu_entries)  # This is an imaginary function
That should work as long as the plugin is either a .py or .pyc file (assuming it is byte-compiled for the platform in question). It can be a standalone file, inside a directory with an __init__.py, or inside a zip file following the same rules.
How do I know this works? It is how I implemented plugins in PyCI. PyCI is a web application but there's no reason why this method wouldn't work for a regular ol' GUI. For the example above I chose to use an imaginary add_menu_entries() function in conjunction with a Plugin object variable that could be used to add a plugin's methods to your GUI's menus.
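Since add_menu_entries() is left imaginary above, here is one toy way it could merge the plugins' menu dictionaries (purely illustrative; a real GUI toolkit would create actual menu items here):

def add_menu_entries(plugin_menu_entries):
    # Merge each plugin's {'Menu': {'Label': callback}} dict into one menu map
    menus = {}
    for entry in plugin_menu_entries:
        for menu_name, items in entry.items():
            menus.setdefault(menu_name, {}).update(items)
    return menus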
Hopefully this answer will help you build your own plugin system. If you want to see precisely how it is implemented I recommend you download the PyCI source code and look at plugin_utils.py and the Example plugin in the plugins_enabled directory.
Here is another example of a Python app that uses plugins: OpenSTV. Here, the plugins can only be Python modules.
