Reliable way to get the "build" directory from within setup.py - python

Inside the setup.py script I need to create some temporary files for the installation. The natural place to put them would be the "build/" directory.
Is there a way to retrieve its path that works whether installing from PyPI, from source, via easy_install, pip, ...?
Thanks a lot!

By default distutils creates build/ in the current working directory, but this can be changed with the --build-base argument. distutils parses that argument when setup() executes, and the parsed value is not accessible from outside, but you can extract it yourself:
import sys

build_base_long = [arg[12:].strip("= ") for arg in sys.argv if arg.startswith("--build-base")]
build_base_short = [arg[2:].strip(" ") for arg in sys.argv if arg.startswith("-b")]
build_base_arg = build_base_long or build_base_short
if build_base_arg:
    build_base = build_base_arg[0]
else:
    build_base = "."
This naive parser is still shorter than an optparse-based version with proper error handling for unknown flags. Alternatively you can use argparse, whose parse_known_args method ignores flags it does not recognize.
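For example, a minimal sketch using argparse's parse_known_args (the default value here is an assumption based on distutils' usual build base):
import argparse
import sys

parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("-b", "--build-base", default="build")
# parse_known_args returns (namespace, leftovers) and does not error out
# on the many other flags that setup.py may receive
known_args, _ = parser.parse_known_args(sys.argv[1:])
build_base = known_args.build_base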

distutils/setuptools provide an abstract Command class that users can use to add custom commands to their package's setup process. This is the same class that built-in setup commands like build and install are subclasses of.
Every class that is a subclass of the abstract Command class must implement the initialize_options, finalize_options, and run methods. The "options" these method names refer to are class attributes that are derived from command-line arguments provided by the user (they can also have default values). The initialize_options method is where a class's options are defined, the finalize_options method is where a class's option values are assigned, and the run method is where a class's option values are used to perform the function of the command.
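As a minimal illustration of those three methods and of user_options, here is a self-contained sketch; the command and option names are made up for the example, not taken from the question:
import setuptools

class HelloCommand(setuptools.Command):
    """Illustrative command, runnable as: python setup.py hello --name=World"""
    description = "print a greeting (illustrative only)"
    # (long name, short name, help text); a trailing '=' means the option takes a value
    user_options = [('name=', 'n', 'who to greet')]

    def initialize_options(self):
        # define the option and give it a default
        self.name = 'World'

    def finalize_options(self):
        # validate/normalize the value parsed from the command line
        assert self.name, "--name must not be empty"

    def run(self):
        # use the option value
        print("Hello, %s!" % self.name)

setuptools.setup(cmdclass={'hello': HelloCommand})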
Since command-line arguments may affect more than one command, some command classes may share options with other command classes. For example, all the distutils/setuptools build commands (build, build_py, build_clib, build_ext, and build_scripts) and the install command need to know where the build directory is. Instead of having every one of these command classes define and parse the same command-line arguments into the same options, the build command, which is the first of all these commands to be executed, defines and parses the command-line arguments and options, and all the other classes get the option values from the build command in their finalize_options method.
For example, the build class defines the build_base and build_lib options in its initialize_options method and then computes their values from the command-line arguments in its finalize_options method. The install class also defines the build_base and build_lib options in its initialize_options method, but it gets the values for these options from the build command in its finalize_options method.
You can use the same pattern to add a custom sub-command to the build command as follows (it would be similar for install):
import setuptools
from distutils.command.build import build

class BuildSomething(setuptools.Command):

    def initialize_options(self):
        # define the command's options
        self.build_base = None
        self.build_lib = None

    def finalize_options(self):
        # get the option values from the build command
        self.set_undefined_options('build',
                                   ('build_base', 'build_base'),
                                   ('build_lib', 'build_lib'))

    def run(self):
        # do something with the option values
        print(self.build_base)  # defaults to 'build'
        print(self.build_lib)

build_something_command = 'build_something'

class Build(build):

    def has_something(self):
        # update this to check if your build should run
        return True

    sub_commands = [(build_something_command, has_something)] + build.sub_commands

COMMAND_CLASS = {
    build_something_command: BuildSomething,  # custom command
    'build': Build  # override distutils/setuptools build command
}

setuptools.setup(cmdclass=COMMAND_CLASS)
Alternatively, you could subclass one of the existing distutils/setuptools command classes if you just want to extend its functionality and it already has the options you need:
import setuptools
from setuptools.command.build_py import build_py

class BuildPy(build_py):

    def initialize_options(self):
        # no extra options; let build_py set up its own
        build_py.initialize_options(self)

    def finalize_options(self):
        build_py.finalize_options(self)

    def run(self):
        # do something with the option values
        print(self.build_lib)  # inherited from build_py
        build_py.run(self)  # make sure the regular build_py still runs

COMMAND_CLASS = {
    'build_py': BuildPy  # override distutils/setuptools build_py command
}

setuptools.setup(cmdclass=COMMAND_CLASS)
Unfortunately, none of this is very well documented anywhere. I learned most of it from reading the distutils and setuptools source code. Any of the build*.py and install*.py files in either repository's command directory are informative. The abstract Command class is defined in distutils.

Perhaps something like this? It works in my case with Python 3.8.
...
from distutils.command.build import get_platform
import sys
import os
...

def configuration(parent_package='', top_path=None):
    config = Configuration('', parent_package, top_path)
    # add xxx library
    config.add_library('xxx', ['xxx/src/fil1.F90',
                               'xxx/src/file2.F90',
                               'xxx/src/file3.F90'],
                       language='f90')
    # check for the temporary build directory option
    _tempopt = None
    _chkopt = ('-t', '--build-temp')
    for _opt in _chkopt:
        if _opt in sys.argv:
            _i = sys.argv.index(_opt)
            if _i < len(sys.argv) - 1:
                _tempopt = sys.argv[_i + 1]
            break
    # check for the base directory option
    _buildopt = 'build'
    _chkopt = ('-b', '--build-base')
    for _opt in _chkopt:
        if _opt in sys.argv:
            _i = sys.argv.index(_opt)
            if _i < len(sys.argv) - 1:
                _buildopt = sys.argv[_i + 1]
            break
    if _tempopt is None:
        # works with python3 (check distutils/command/build.py)
        platform_specifier = ".%s-%d.%d" % (get_platform(), *sys.version_info[:2])
        _tempopt = '%s%stemp%s' % (_buildopt, os.sep, platform_specifier)
    # add yyy module (wraps fortran code in xxx library)
    config.add_extension('fastpost', sources=['yyy/src/fastpost.f90'],
                         f2py_options=['--quiet'],
                         libraries=['xxx'])
    # to access the mod files produced from the compilation of fortran modules, add
    # the temp build directory to the include directories of the configuration
    config.add_include_dirs(_tempopt)
    return config

setup(name="pysimpp",
      version="0.0.1",
      description="xxx",
      author="xxx",
      author_email="xxx#yyy",
      configuration=configuration,)

Related

Any "fromfile_prefix_chars" equivalent for Python-Click

Is there any equivalent in Click (https://click.palletsprojects.com/) for the fromfile_prefix_chars option that is available in the argparse library?
I sometimes have a lot of arguments to hand over to a Click-based application and, especially when using file paths, may hit some Windows limits. When using argparse, the solution was the fromfile_prefix_chars option. However, we are moving to Click now.
Is there any equivalent available or is it possible to replicate this functionality in Click?
Many thanks for your help.
Best regards,
gpxricky
Yes, there is. You can use default_map to override the defaults.
You could do it in various ways:
Pass file with python dictionary
You could parse a python dictionary with ast.literal_eval:
import ast
import click
import os

@click.group()
@click.pass_context
def main(ctx):
    config = os.getenv('CLICK_CONFIG_FILE', './click_config')
    if os.path.exists(config):
        with open(config) as f:
            ctx.default_map = ast.literal_eval(f.read())

@main.command()
@click.option("--param", default=2)
def test(param):
    print(param)

if __name__ == '__main__':
    main()
Let's assume we have two config files:
# click_config
{
    'test': {'param': 3}
}

# config_click
{
    'test': {'param': 4}
}
Now, here is what happens when you call your command:
# Highest precedence, overrides any config file
$ python main.py test --param 1
1
# No config file exists. Takes the default, defined in the command
$ python main.py test
2
# Default config file exists, overrides default to 3
$ python main.py test
3
# Custom config file provided, overrides default to 4
$ CLICK_CONFIG_FILE=./config_click python main.py test
4
# Again. The command line has the highest precedence:
$ CLICK_CONFIG_FILE=./config_click python main.py test --param 1
1
Pass yaml config file
You could follow this answer here to do the same thing with yaml.
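For example, a minimal sketch assuming PyYAML is installed (the file name and layout are illustrative):
import os
import click
import yaml

@click.group()
@click.pass_context
def main(ctx):
    config = os.getenv('CLICK_CONFIG_FILE', './click_config.yaml')
    if os.path.exists(config):
        with open(config) as f:
            # e.g. the file contains:  test: {param: 3}
            ctx.default_map = yaml.safe_load(f)

@main.command()
@click.option("--param", default=2)
def test(param):
    print(param)

if __name__ == '__main__':
    main()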
Pass ini file
Here you can find an article explaining how to adopt ini files.
Use an extension (the config format is again ini)
Check this out.

passing a URL to test against with nose testconfig

Is there a way to configure nose-testconfig so that I can pass a URL on the command line, i.e. 'nosetests --tc=systest_url test_suit.py'?
I need to run my Selenium tests against the dev and systest environments when performing builds on TeamCity. Our team decided to use Python for the UI tests, and I'm more of a Java guy, so I'm trying to figure out how the plugin works. It looks like I can store the URL in YAML and pass the file to the --tc command, but it doesn't seem to work.
The code I inherited looks like this:
URL = config['test' : 'https://www.google.com', ]
class BaseTestCase(unittest.TestCase, Navigation):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()
        cls.driver.implicitly_wait(5)
        cls.driver.maximize_window()
        cls.driver.get(URL)
which obviously is not working
There is a plugin for nose, nose-testconfig. You can specify a config file and pass its filename to nose's --tc-file argument.
config.ini
[myapp_servers]
main_server = 10.1.1.1
secondary_server = 10.1.1.2
In your test file you can load the config.
test_app.py
from testconfig import config

def test_foo():
    main_server = config['myapp_servers']['main_server']
Then, call nose with the arguments:
nosetests -s --tc-file config.ini
As described in docs, you can use other config files type like YAML, JSON or even Python modules.
Using the nose-testconfig --tc option you can override a configuration value. Separate the key from the value using a colon. For example,
nosetests test.py --tc=url:dev.example.com
will make the value available in config['url'].
from testconfig import config

def test_url_is_dev():
    assert 'dev' in config['url']
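Applied to the setup in the question, a minimal sketch (assuming nose-testconfig and selenium are installed; the Navigation mixin from the question is omitted here):
import unittest
from selenium import webdriver
from testconfig import config

class BaseTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # picks up the value passed on the command line via --tc=url:...
        url = config.get('url', 'https://www.google.com')
        cls.driver = webdriver.Firefox()
        cls.driver.implicitly_wait(5)
        cls.driver.maximize_window()
        cls.driver.get(url)
and run it with, for example: nosetests test_suit.py --tc=url:https://dev.example.com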

How to run_command at setuptools with options?

I have two custom commands in my setup.py file: create_tables and drop_tables:
class create_tables(command):
    description = 'create DB tables'
    user_options = [
        ('database=', 'd', 'which database configuration use'),
        ('reset', 'r', 'reset all data previously'),
    ]

    def initialize_options(self):
        command.initialize_options(self)
        self.reset = False

    def run(self):
        if self.reset:
            self.run_command('drop_tables')
        else:
            command.run(self)
        from vk_relations import models
        models.create_tables()
        print 'Tables were created successfully'

class drop_tables(command):
    description = 'drop all created DB tables'
    user_options = [
        ('database=', 'd', 'which database configuration use'),
    ]

    def run(self):
        command.run(self)
        answer = raw_input('Are you sure you want to clear all VK Relations data? (y/n): ')
        if 'y' == answer:
            from vk_relations import models
            models.drop_tables()
            print 'Tables were dropped successfully'
        elif 'n' == answer:
            quit()
        else:
            sys.exit()
The command $ setup.py create_tables -r -dmain should run the drop_tables command and then create new tables in the main database, but the run_command method doesn't let you pass options to the command it runs. How do I specify the database option for drop_tables from inside the create_tables command?
Right now I've used this hack:
cmd_obj = self.distribution.get_command_obj('drop_tables')
cmd_obj.database = self.database
self.run_command('drop_tables')
The "proper" solution
Setting attributes on the objects will fail for the pre-defined targets such as build.
The closest you can get to a proper solution here is the following:
class drop_tables(command):  # <-- note: this change goes in the drop_tables command

    def finalize_options(self):
        self.set_undefined_options("create_tables", ("database", "database"))
This is the approach used to inherit arguments from build to build_py and other subcommands.
On the build command
I do not fancy the circular reference the authors of the distutils package introduced in the build command.
The execution order there is as follows: the build command calls the build_py subcommand. The subcommand goes back to the build command and gets the parameters that are left undefined. This creates tight coupling, because both commands need to know about each other.
Also, if another aggregate command were added, ambiguity would be introduced: build_py would have two sources of parameters to inherit.
An approach that reduces the coupling would look different: the build command is the aggregate command, so it should handle passing parameters to all of its subcommands.
class build(command):
    ...
    def finalize_options(self):
        for cmd_name in self.get_sub_commands():
            cmd_obj = self.distribution.get_command_obj(cmd_name)
            cmd_obj.set_undefined_options("build", ("build_lib", "build_lib"), ...)
Now there's no need to pass the command by name; we can use the instance instead. This would also resolve the infinite recursion in set_undefined_options > ensure_finalized > finalize_options > set_undefined_options.
The proper solution
Given the current state of affairs, the better solution for your problem would be:
class create_tables(command):

    def run(self):
        cmd_obj = self.distribution.get_command_obj("drop_tables")
        cmd_obj.set_undefined_options("create_tables", ("database", "database"))
        self.run_command("drop_tables")

Can python distutils compile CUDA code?

I have CUDA code which I want to build into a dynamic library for Python using distutils. But it seems distutils doesn't recognize ".cu" files even when the "nvcc" compiler is installed. Not sure how to get it done.
Distutils can't compile CUDA by default, because it doesn't support using multiple compilers simultaneously. By default, it picks a compiler based on your platform alone, not on the type of source code you have.
I have an example project on github that contains some monkey patches into distutils to hack in support for this. The example project is a C++ class that manages some GPU memory and a CUDA kernel, wrapped in swig, all compiled with just python setup.py install. The focus is on array operations, so we're also using numpy. All the kernel does for this example project is increment each element in an array by one.
The code is here: https://github.com/rmcgibbo/npcuda-example. Here's the setup.py script. The key to the whole thing is customize_compiler_for_nvcc().
import os
from os.path import join as pjoin
from setuptools import setup
from distutils.extension import Extension
from distutils.command.build_ext import build_ext
import subprocess
import numpy

def find_in_path(name, path):
    "Find a file in a search path"
    # adapted from http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/
    for dir in path.split(os.pathsep):
        binpath = pjoin(dir, name)
        if os.path.exists(binpath):
            return os.path.abspath(binpath)
    return None

def locate_cuda():
    """Locate the CUDA environment on the system

    Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'
    and values giving the absolute path to each directory.

    Starts by looking for the CUDAHOME env variable. If not found, everything
    is based on finding 'nvcc' in the PATH.
    """
    # first check if the CUDAHOME env variable is in use
    if 'CUDAHOME' in os.environ:
        home = os.environ['CUDAHOME']
        nvcc = pjoin(home, 'bin', 'nvcc')
    else:
        # otherwise, search the PATH for NVCC
        nvcc = find_in_path('nvcc', os.environ['PATH'])
        if nvcc is None:
            raise EnvironmentError('The nvcc binary could not be '
                'located in your $PATH. Either add it to your path, or set $CUDAHOME')
        home = os.path.dirname(os.path.dirname(nvcc))

    cudaconfig = {'home': home, 'nvcc': nvcc,
                  'include': pjoin(home, 'include'),
                  'lib64': pjoin(home, 'lib64')}
    for k, v in cudaconfig.iteritems():
        if not os.path.exists(v):
            raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))
    return cudaconfig

CUDA = locate_cuda()

# Obtain the numpy include directory. This logic works across numpy versions.
try:
    numpy_include = numpy.get_include()
except AttributeError:
    numpy_include = numpy.get_numpy_include()

ext = Extension('_gpuadder',
                sources=['src/swig_wrap.cpp', 'src/manager.cu'],
                library_dirs=[CUDA['lib64']],
                libraries=['cudart'],
                runtime_library_dirs=[CUDA['lib64']],
                # this syntax is specific to this build system
                # we're only going to use certain compiler args with nvcc and not with gcc
                # the implementation of this trick is in customize_compiler_for_nvcc() below
                extra_compile_args={'gcc': [],
                                    'nvcc': ['-arch=sm_20', '--ptxas-options=-v', '-c', '--compiler-options', "'-fPIC'"]},
                include_dirs=[numpy_include, CUDA['include'], 'src'])

# check for swig
if find_in_path('swig', os.environ['PATH']):
    subprocess.check_call('swig -python -c++ -o src/swig_wrap.cpp src/swig.i', shell=True)
else:
    raise EnvironmentError('the swig executable was not found in your PATH')

def customize_compiler_for_nvcc(self):
    """inject deep into distutils to customize how the dispatch
    to gcc/nvcc works.

    If you subclass UnixCCompiler, it's not trivial to get your subclass
    injected in, and still have the right customizations (i.e.
    distutils.sysconfig.customize_compiler) run on it. So instead of going
    the OO route, I have this. Note, it's kind of like a weird functional
    subclassing going on."""

    # tell the compiler it can process .cu files
    self.src_extensions.append('.cu')

    # save references to the default compiler_so and _compile methods
    default_compiler_so = self.compiler_so
    super = self._compile

    # now redefine the _compile method. This gets executed for each
    # object but distutils doesn't have the ability to change compilers
    # based on source extension: we add it.
    def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
        if os.path.splitext(src)[1] == '.cu':
            # use cuda for .cu files
            self.set_executable('compiler_so', CUDA['nvcc'])
            # use only a subset of the extra_postargs, which are 1-1 translated
            # from the extra_compile_args in the Extension class
            postargs = extra_postargs['nvcc']
        else:
            postargs = extra_postargs['gcc']
        super(obj, src, ext, cc_args, postargs, pp_opts)
        # reset the default compiler_so, which we might have changed for cuda
        self.compiler_so = default_compiler_so

    # inject our redefined _compile method into the class
    self._compile = _compile

# run the customize_compiler
class custom_build_ext(build_ext):
    def build_extensions(self):
        customize_compiler_for_nvcc(self.compiler)
        build_ext.build_extensions(self)

setup(name='gpuadder',
      # random metadata. there's more you can supply
      author='Robert McGibbon',
      version='0.1',
      # this is necessary so that the swigged python file gets picked up
      py_modules=['gpuadder'],
      package_dir={'': 'src'},
      ext_modules=[ext],
      # inject our custom trigger
      cmdclass={'build_ext': custom_build_ext},
      # since the package has c code, the egg cannot be zipped
      zip_safe=False)
As an alternative to distutils/setuptools, you could use scikit-build (along with CMakeLists.txt, pyproject.toml, and setup.cfg/setup.py):
import re
import sys
from pathlib import Path

from skbuild import setup
from setuptools import find_packages

# https://github.com/scikit-build/scikit-build/issues/521#issuecomment-753035688
for i in (Path(__file__).resolve().parent / "_skbuild").rglob("CMakeCache.txt"):
    i.write_text(re.sub("^//.*$\n^[^#].*pip-build-env.*$", "", i.read_text(), flags=re.M))

setup(cmake_args=[f"-DPython3_ROOT_DIR={sys.prefix}"],
      packages=find_packages(exclude=["tests"]))

How do I run all Python unit tests in a directory?

I have a directory that contains my Python unit tests. Each unit test module is of the form test_*.py. I am attempting to make a file called all_test.py that will, you guessed it, run all files in the aforementioned test form and return the result. I have tried two methods so far; both have failed. I will show the two methods, and I hope someone out there knows how to actually do this correctly.
For my first valiant attempt, I thought "If I just import all my testing modules in the file, and then call this unittest.main() doodad, it will work, right?" Well, turns out I was wrong.
import glob
import unittest

testSuite = unittest.TestSuite()
test_file_strings = glob.glob('test_*.py')
module_strings = [str[0:len(str)-3] for str in test_file_strings]

if __name__ == "__main__":
    unittest.main()
This did not work, the result I got was:
$ python all_test.py
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
For my second try, I thought, OK, maybe I will try to do this whole testing thing in a more "manual" fashion. So I attempted to do that below:
import glob
import unittest

testSuite = unittest.TestSuite()
test_file_strings = glob.glob('test_*.py')
module_strings = [str[0:len(str)-3] for str in test_file_strings]
[__import__(str) for str in module_strings]
suites = [unittest.TestLoader().loadTestsFromName(str) for str in module_strings]
[testSuite.addTest(suite) for suite in suites]
print testSuite

result = unittest.TestResult()
testSuite.run(result)
print result

# Ok, at this point I have a result
# How do I display it as the normal unit test command line output?
if __name__ == "__main__":
    unittest.main()
This also did not work, but it seems so close!
$ python all_test.py
<unittest.TestSuite tests=[<unittest.TestSuite tests=[<unittest.TestSuite tests=[<test_main.TestMain testMethod=test_respondes_to_get>]>]>]>
<unittest.TestResult run=1 errors=0 failures=0>
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I seem to have a suite of some sort, and I can execute the result. I am a little concerned about the fact that it says I have only run=1, seems like that should be run=2, but it is progress. But how do I pass and display the result to main? Or how do I basically get it working so I can just run this file, and in doing so, run all the unit tests in this directory?
With Python 2.7 and higher you don't have to write new code or use third-party tools to do this; recursive test execution via the command line is built-in. Put an __init__.py in your test directory and:
python -m unittest discover <test_directory>
# or
python -m unittest discover -s <directory> -p '*_test.py'
You can read more in the Python 2.7 or Python 3.x unittest documentation.
Update for 2021:
Lots of modern python projects use more advanced tools like pytest. For example, pull down matplotlib or scikit-learn and you will see they both use it.
It is important to know about these newer tools because, when you have more than 7000 tests, you need:
more advanced ways to summarize what passed, what was skipped, warnings, and errors
easy ways to see how they failed
percent complete as it is running
total run time
ways to generate a test report
etc etc
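If you do adopt pytest, here is a minimal sketch of a runner script (assuming pytest is installed and your tests live under a tests/ directory; the path is illustrative):
import sys
import pytest

if __name__ == "__main__":
    # pytest discovers test_*.py / *_test.py files under the given directory
    sys.exit(pytest.main(["-v", "tests/"]))
This is equivalent to running pytest -v tests/ from the command line.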
In Python 3, if you're using unittest.TestCase:
You must have an __init__.py file (empty or not) in your test directory (it must be named test/)
Your test files inside test/ must match the pattern test_*.py.
They can be inside a subdirectory under test/. Those subdirs can be named anything, but they all need to contain an __init__.py file
Then, you can run all the tests with:
python -m unittest
Done! A solution in less than 100 lines. Hopefully another Python beginner saves time by finding this.
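For example, an illustrative layout that this picks up (file and directory names are only an example):
test/
    __init__.py
    test_foo.py
    widgets/
        __init__.py
        test_bar.py
Running python -m unittest from the project root then runs both test files.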
You could use a test runner that would do this for you. nose is very good for example. When run, it will find tests in the current tree and run them.
Updated:
Here's some code from my pre-nose days. You probably don't want the explicit list of module names, but maybe the rest will be useful to you.
testmodules = [
    'cogapp.test_makefiles',
    'cogapp.test_whiteutils',
    'cogapp.test_cogapp',
    ]

suite = unittest.TestSuite()

for t in testmodules:
    try:
        # If the module defines a suite() function, call it to get the suite.
        mod = __import__(t, globals(), locals(), ['suite'])
        suitefn = getattr(mod, 'suite')
        suite.addTest(suitefn())
    except (ImportError, AttributeError):
        # else, just load all the test cases from the module.
        suite.addTest(unittest.defaultTestLoader.loadTestsFromName(t))

unittest.TextTestRunner().run(suite)
This is now possible directly from unittest: unittest.TestLoader.discover.
import unittest
loader = unittest.TestLoader()
start_dir = 'path/to/your/test/files'
suite = loader.discover(start_dir)
runner = unittest.TextTestRunner()
runner.run(suite)
Well, by studying the code above a bit (specifically using TextTestRunner and defaultTestLoader), I was able to get pretty close. Eventually I fixed my code by also just passing all test suites to a single TestSuite constructor, rather than adding them "manually", which fixed my other problems. So here is my solution.
import glob
import unittest
test_files = glob.glob('test_*.py')
module_strings = [test_file[0:len(test_file)-3] for test_file in test_files]
suites = [unittest.defaultTestLoader.loadTestsFromName(test_file) for test_file in module_strings]
test_suite = unittest.TestSuite(suites)
test_runner = unittest.TextTestRunner().run(test_suite)
Yeah, it is probably easier to just use nose than to do this, but that is beside the point.
If you want to run all the tests from various test case classes and you're happy to specify them explicitly then you can do it like this:
from unittest import TestLoader, TextTestRunner, TestSuite
from uclid.test.test_symbols import TestSymbols
from uclid.test.test_patterns import TestPatterns

if __name__ == "__main__":
    loader = TestLoader()
    tests = [
        loader.loadTestsFromTestCase(test)
        for test in (TestSymbols, TestPatterns)
    ]
    suite = TestSuite(tests)

    runner = TextTestRunner(verbosity=2)
    runner.run(suite)
where uclid is my project and TestSymbols and TestPatterns are subclasses of TestCase.
I have used the discover method and an overloading of load_tests to achieve this result in a (minimal, I think) number of lines of code:
import unittest
from unittest import TestSuite

def load_tests(loader, tests, pattern):
    ''' Discover and load all unit tests in all files named ``*_tests.py`` in ``./src/``
    '''
    suite = TestSuite()
    for all_test_suite in unittest.defaultTestLoader.discover('src', pattern='*_tests.py'):
        for test_suite in all_test_suite:
            suite.addTests(test_suite)
    return suite

if __name__ == '__main__':
    unittest.main()
Execution gives something like:
Ran 27 tests in 0.187s
OK
I tried various approaches, but all seem flawed or I have to make up some code; that's annoying. But there's a convenient way under Linux: simply find every test through a certain pattern and then invoke them one by one.
find . -name 'Test*py' -exec python '{}' \;
and most importantly, it definitely works.
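Note that this only runs anything if each Test*.py file is executable as a standalone script, i.e. each file ends with the usual guard:
import unittest

# ... test cases ...

if __name__ == '__main__':
    unittest.main()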
In the case of a packaged library or application, you don't want to do this yourself. setuptools will do it for you.
To use this command, your project’s tests must be wrapped in a unittest test suite by either a function, a TestCase class or method, or a module or package containing TestCase classes. If the named suite is a module, and the module has an additional_tests() function, it is called and the result (which must be a unittest.TestSuite) is added to the tests to be run. If the named suite is a package, any submodules and subpackages are recursively added to the overall test suite.
Just tell it where your root test package is, like:
setup(
    # ...
    test_suite='somepkg.test',
)
And run python setup.py test.
File-based discovery may be problematic in Python 3 unless you avoid relative imports in your test suite, because discover uses file import. Even though it supports an optional top_level_dir, I had some infinite recursion errors. So a simple solution for non-packaged code is to put the following in the __init__.py of your test package (see the load_tests protocol).
import unittest

from . import foo, bar

def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromModule(foo))
    suite.addTests(loader.loadTestsFromModule(bar))
    return suite
This is an old question, but what worked for me now (in 2019) is:
python -m unittest *_test.py
All my test files are in the same folder as the source files and they end with _test.
I use PyDev/LiClipse and haven't really figured out how to run all tests at once from the GUI. (Edit: you right-click the root test folder and choose Run as -> Python unit-test.)
This is my current workaround:
import unittest

def load_tests(loader, tests, pattern):
    return loader.discover('.')

if __name__ == '__main__':
    unittest.main()
I put this code in a module called all in my test directory. If I run this module as a unittest from LiClipse then all tests are run. If I ask to only repeat specific or failed tests then only those tests are run. It doesn't interfere with my commandline test runner either (nosetests) -- it's ignored.
You may need to change the arguments to discover based on your project setup.
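For example, a variant that points discovery at a specific directory and file pattern (the values here are illustrative):
import unittest

def load_tests(loader, tests, pattern):
    # look under ./tests for files matching test_*.py
    return loader.discover(start_dir='tests', pattern='test_*.py')

if __name__ == '__main__':
    unittest.main()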
Based on the answer of Stephen Cagle I added support for nested test modules.
import fnmatch
import os
import unittest

def all_test_modules(root_dir, pattern):
    test_file_names = all_files_in(root_dir, pattern)
    return [path_to_module(str) for str in test_file_names]

def all_files_in(root_dir, pattern):
    matches = []
    for root, dirnames, filenames in os.walk(root_dir):
        for filename in fnmatch.filter(filenames, pattern):
            matches.append(os.path.join(root, filename))
    return matches

def path_to_module(py_file):
    return strip_leading_dots( \
        replace_slash_by_dot( \
            strip_extension(py_file)))

def strip_extension(py_file):
    return py_file[0:len(py_file) - len('.py')]

def replace_slash_by_dot(str):
    return str.replace('\\', '.').replace('/', '.')

def strip_leading_dots(str):
    while str.startswith('.'):
        str = str[1:len(str)]
    return str

module_names = all_test_modules('.', '*Tests.py')
suites = [unittest.defaultTestLoader.loadTestsFromName(mname) for mname
          in module_names]

testSuite = unittest.TestSuite(suites)
runner = unittest.TextTestRunner(verbosity=1)
runner.run(testSuite)
The code searches all subdirectories of . for *Tests.py files which are then loaded. It expects each *Tests.py to contain a single class *Tests(unittest.TestCase) which is loaded in turn and executed one after another.
This works with arbitrary deep nesting of directories/modules, but each directory in between needs to contain an empty __init__.py file at least. This allows the test to load the nested modules by replacing slashes (or backslashes) by dots (see replace_slash_by_dot).
I just created a discover.py file in my base test directory and added import statements for everything in my sub-directories. Then discover is able to find all my tests in those directories by running it on discover.py:
python -m unittest discover ./test -p '*.py'

# /test/discover.py
import unittest

from test.package1.mod1 import XYZTest
from test.package1.package2.mod2 import ABCTest
...

if __name__ == "__main__":
    unittest.main()
Encountered the same issue.
The solution is to add an empty __init__.py to each folder and use python -m unittest discover -s.
Project Structure
tests/
    __init__.py
    domain/
        value_object/
            __init__.py
            test_name.py
        __init__.py
    presentation/
        __init__.py
        test_app.py
And running the command
python -m unittest discover -s tests/domain
To get the expected outcome
.
----------------------------------------------------------------------
Ran 1 test in 0.007s
Because test discovery is a whole subject of its own, there are dedicated frameworks for it:
nose
Py.Test
More reading here : https://wiki.python.org/moin/PythonTestingToolsTaxonomy
This Bash script will execute the Python unittest test directory from ANYWHERE in the file system, no matter what working directory you are in: its working directory will always be where the test directory is located.
ALL TESTS, independent of $PWD
The unittest Python module is sensitive to your current directory, unless you tell it where to look (using the discover -s option).
This is useful when staying in the ./src or ./example working directory and you need a quick overall unit test:
#!/bin/bash
this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"
python -m unittest discover -s "$readlink"/test -v
SELECTED TESTS, independent of $PWD
I name this utility file runone.py and use it like this:
runone.py <test-python-filename-minus-dot-py-fileextension>
#!/bin/bash
this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"
(cd "$dirname"/test; python -m unittest $1)
No need for a test/__init__.py file to burden your package/memory-overhead during production.
I have no package and, as mentioned on this page, this creates issues with discovery. So I used the following solution. All the test results will be put in a given output folder.
RunAllUT.py:
"""
The given script is executing all the Unit Test of the project stored at the
path %relativePath2Src% currently fixed coded for the given project.
Prerequired:
- Anaconda should be install
- For the current user, an enviornment called "mtToolsEnv" should exists
- xmlrunner Library should be installed
"""
import sys
import os
import xmlrunner
from Repository import repository
relativePath2Src="./../.."
pythonPath=r'"C:\Users\%USERNAME%\.conda\envs\YourConfig\python.exe"'
outputTestReportFolder=os.path.dirname(os.path.abspath(__file__))+r'\test-reports' #subfolder in current file path
class UTTesting():
"""
Class tto run all the UT of the project
"""
def __init__(self):
"""
Initiate instance
Returns
-------
None.
"""
self.projectRepository = repository()
self.UTfile = [] #List all file
def retrieveAllUT(self):
"""
Generate the list of UT file in the project
Returns
-------
None.
"""
print(os.path.realpath(relativePath2Src))
self.projectRepository.retriveAllFilePaths(relativePath2Src)
#self.projectRepository.printAllFile() #debug
for file2scan in self.projectRepository.devfile:
if file2scan.endswith("_UT.py"):
self.UTfile.append(file2scan)
print(self.projectRepository.devfilepath[file2scan]+'/'+file2scan)
def runUT(self,UTtoRun):
"""
Run a single UT
Parameters
----------
UTtoRun : String
File Name of the UT
Returns
-------
None.
"""
print(UTtoRun)
if UTtoRun in self.projectRepository.devfilepath:
UTtoRunFolderPath=os.path.realpath(os.path.join(self.projectRepository.devfilepath[UTtoRun]))
UTtoRunPath = os.path.join(UTtoRunFolderPath, UTtoRun)
print(UTtoRunPath)
#set the correct execution context & run the test
os.system(" cd " + UTtoRunFolderPath + \
" & " + pythonPath + " " + UTtoRunPath + " " + outputTestReportFolder )
def runAllUT(self):
"""
Run all the UT contained in self
The function "retrieveAllUT" sjould ahve been performed before
Returns
-------
None.
"""
for UTfile in self.UTfile:
self.runUT(UTfile)
if __name__ == "__main__":
undertest=UTTesting()
undertest.retrieveAllUT()
undertest.runAllUT()
In my specific project, I have a class that I also use in other scripts. This might be overkill for your use case.
Repository.py
import os

class repository():
    """
    Class that describes folders and files in a repository
    """

    def __init__(self):
        """
        Initiate instance

        Returns
        -------
        None.
        """
        self.devfile = []  # List all files
        self.devfilepath = {}  # List all file paths

    def retriveAllFilePaths(self, pathrepo):
        """
        Retrieve all files and their paths into the class

        Parameters
        ----------
        pathrepo : Path used for the parsing

        Returns
        -------
        None.
        """
        for path, subdirs, files in os.walk(pathrepo):
            for file_name in files:
                self.devfile.append(file_name)
                self.devfilepath[file_name] = path

    def printAllFile(self):
        """
        Display all files with their paths

        Returns
        -------
        None.
        """
        for file_loop in self.devfile:
            print(self.devfilepath[file_loop] + '/' + file_loop)
In your test files, you need to have a main like this:
if __name__ == "__main__":
import xmlrunner
import sys
if len(sys.argv) > 1:
outputFolder = sys.argv.pop() #avoid conflic with unittest.main
else:
outputFolder = r'test-reports'
print("Report will be created and store there: " + outputFolder)
unittest.main(testRunner=xmlrunner.XMLTestRunner(output=outputFolder))
Here is my approach: creating a wrapper to run tests from the command line:
#!/usr/bin/env python3
import os, sys, unittest, argparse, inspect, logging

if __name__ == '__main__':
    # Parse arguments.
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("-?", "--help", action="help", help="show this help message and exit")
    parser.add_argument("-v", "--verbose", action="store_true", dest="verbose", help="increase output verbosity")
    parser.add_argument("-d", "--debug", action="store_true", dest="debug", help="show debug messages")
    parser.add_argument("-h", "--host", action="store", dest="host", help="Destination host")
    parser.add_argument("-b", "--browser", action="store", dest="browser", help="Browser driver.", choices=["Firefox", "Chrome", "IE", "Opera", "PhantomJS"])
    parser.add_argument("-r", "--reports-dir", action="store", dest="dir", help="Directory to save screenshots.", default="reports")
    parser.add_argument('files', nargs='*')
    args = parser.parse_args()

    # Load files from the arguments.
    for filename in args.files:
        exec(open(filename).read())

    # See: http://codereview.stackexchange.com/q/88655/15346
    def make_suite(tc_class):
        testloader = unittest.TestLoader()
        testnames = testloader.getTestCaseNames(tc_class)
        suite = unittest.TestSuite()
        for name in testnames:
            suite.addTest(tc_class(name, cargs=args))
        return suite

    # Add all tests.
    alltests = unittest.TestSuite()
    for name, obj in inspect.getmembers(sys.modules[__name__]):
        if inspect.isclass(obj) and name.startswith("FooTest"):
            alltests.addTest(make_suite(obj))

    # Set up logger.
    verbose = bool(os.environ.get('VERBOSE', args.verbose))
    debug = bool(os.environ.get('DEBUG', args.debug))
    if verbose or debug:
        logging.basicConfig(stream=sys.stdout)
        root = logging.getLogger()
        root.setLevel(logging.INFO if verbose else logging.DEBUG)
        ch = logging.StreamHandler(sys.stdout)
        ch.setLevel(logging.INFO if verbose else logging.DEBUG)
        ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s'))
        root.addHandler(ch)
    else:
        logging.basicConfig(stream=sys.stderr)

    # Run tests.
    result = unittest.TextTestRunner(verbosity=2).run(alltests)
    sys.exit(not result.wasSuccessful())
For the sake of simplicity, please excuse my non-PEP8 coding standards.
Then you can create a BaseTest class for components common to all your tests, so each of your tests would simply look like:
from BaseTest import BaseTest

class FooTestPagesBasic(BaseTest):
    def test_foo(self):
        driver = self.driver
        driver.get(self.base_url + "/")
To run, you simply specify the tests as part of the command-line arguments, e.g.:
./run_tests.py -h http://example.com/ tests/**/*.py
