I want to unit test some Python extensions. To achieve this I'm running setup() in a script:
from distutils.core import setup, Extension
import os

DIR = os.path.dirname(__file__)

def call_setup():
    module1 = Extension('callbacks',
                        sources=[os.path.join(DIR, 'callbacks.c')])
    setup(
        script_name='setup.py',
        script_args=['build'],
        name='PackageName',
        ext_modules=[module1])
To avoid leaving junk in the test directory, I want to clean up the build after the tests run. I'd like to run distutils.command.clean.clean() in the unittest tearDown(). How do I get the dist object for the distribution that must be passed as an argument to clean()?
Your call to setup() returns a Distribution instance: setup() builds one from its keyword arguments and returns it, so keep a reference to the return value and pass it to clean(). See the setup() function for the list of keyword arguments accepted by the Distribution constructor.
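For example, here is a minimal sketch of capturing the returned Distribution and driving the clean command from tearDown(); the clean_build() helper and the --all handling are my own additions, not part of your original script:

from distutils.command.clean import clean
from distutils.core import setup, Extension
import os

DIR = os.path.dirname(__file__)

def call_setup():
    module1 = Extension('callbacks',
                        sources=[os.path.join(DIR, 'callbacks.c')])
    # setup() returns the Distribution it built, so keep a reference to it
    return setup(script_name='setup.py',
                 script_args=['build'],
                 name='PackageName',
                 ext_modules=[module1])

def clean_build(dist):
    # e.g. call this from tearDown() with the Distribution returned by call_setup()
    cmd = clean(dist)
    cmd.all = True  # also remove build/lib, like `setup.py clean --all`
    cmd.finalize_options()
    cmd.run()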
Related
I'm trying to incorporate a C++ extension as a submodule of an existing Python library via CMake. Building the C++ extension works fine, and importing it as a standalone Python module works, but not as a submodule of the parent frontend library.
I have the following directory structure:
frontend/
    foo.py
    bar.py
    backend/
        backend.cpp
The extension is bound to a Python module via pybind11:
PYBIND11_MODULE(backend, m)
{
    m.doc() = "backend c++ implementation"; // optional module docstring
    m.def("method", &method, "The method I want to call from python.");
}
In the CMakeLists.txt, the relevant line is:
pybind11_add_module(backend "frontend/backend/backend.cpp")
I've followed the instructions from here and here to write the setup.py script. I guess the most important lines look like this:
import os
import subprocess
from pathlib import Path

from setuptools import setup, Extension, find_packages
from setuptools.command.build_ext import build_ext
from setuptools.command.test import test as TestCommand

class CMakeExtension(Extension):
    def __init__(self, name, sourcedir=".", sources=[]):
        Extension.__init__(self, name, sources=[])

class CMakeBuild(build_ext):
    def run(self):
        build_directory = os.path.abspath(self.build_temp)
        if not os.path.exists(self.build_temp):
            os.makedirs(self.build_temp)
        cmake_list_dir = os.path.abspath(os.path.dirname(__file__))
        print("-" * 10, "Running CMake prepare", "-" * 40)
        subprocess.check_call(
            ["cmake", cmake_list_dir], cwd=self.build_temp,
        )
        print("-" * 10, "Building extensions", "-" * 40)
        # self.build_args is set elsewhere in the full script
        cmake_cmd = ["cmake", "--build", "."] + self.build_args
        subprocess.check_call(cmake_cmd, cwd=self.build_temp)
        # Move from build temp to final position
        for ext in self.extensions:
            self.move_output(ext)

    def move_output(self, ext):
        build_temp = Path(self.build_temp).resolve()
        dest_path = Path(self.get_ext_fullpath(ext.name)).resolve()
        source_path = build_temp / self.get_ext_filename(ext.name)
        dest_directory = dest_path.parents[0]
        dest_directory.mkdir(parents=True, exist_ok=True)
        self.copy_file(source_path, dest_path)

extensions = [CMakeExtension("backend")]

setup(
    name="frontend",
    packages=["frontend"],
    ext_modules=extensions,
    cmdclass=dict(build_ext=CMakeBuild),
)
But this does not make backend a submodule of frontend; instead it becomes a top-level module of its own. So this works:
from backend import method
But to avoid naming issues with other libraries, what I would like to have is this:
from frontend.backend import method
Changing the name in the pybind binding, or in the extension call to extensions = [CMakeExtension("frontend.backend")], unfortunately does not resolve my problem: the setup then does not find the backend.<platform>.so shared library, because it looks for frontend/backend.<platform>.so, which does not exist. How could I resolve this issue?
I think I've resolved the issue with the following lines:
Change in the setup.py file:
ext_modules = [
    Extension(
        "frontend.backend", sources=["frontend/backend/backend.cpp"]
    )
]
Change in the CMakeLists.txt file:
pybind11_add_module(backend "frontend/backend/backend.cpp")
set_target_properties(backend
    PROPERTIES
        ARCHIVE_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/frontend"
        LIBRARY_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/frontend"
        RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/frontend"
)
The shared library backend.<platform>.so must end up in the frontend directory. Neither the pybind11 module name nor the .cpp source file should contain any "." in their names, because the get_ext_fullpath() method of build_ext splits on dots. Only the frontend directory contains an __init__.py file.
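As a quick sanity check (a small script of my own, not part of the answer's build), you can print where the built file has to end up for from frontend.backend import method to work:

import sysconfig

# the platform-specific extension suffix, e.g. ".cpython-310-x86_64-linux-gnu.so"
suffix = sysconfig.get_config_var("EXT_SUFFIX")
print("expected file:", "frontend/backend" + suffix)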
I am setting up a package using distutils.
I need to allow access to a module that is built during the set-up process and is located in ./build/temp.linux-x86_64-3.6. I do this by including the
include_dirs=["./build/temp.linux-x86_64-3.6"]
when adding the extension to the distutils Configuration.
My question: is there a way of setting this using a wildcard, such as:
include_dirs=["./build/temp.linux*"]
because when I try this it fails with the error:
Nonexistent include directory ‘build/temp.linux*’ [-Wmissing-include-dirs]
The reason I want this is that the build folder will be named differently depending on the system. Alternatively, if anyone knows a way of figuring out what this temp build folder will be called, that would also work.
The way I have got around this problem is as follows:
def return_major_minor_python():
    import sys
    return str(sys.version_info[0]) + "." + str(sys.version_info[1])

def return_include_dir():
    from distutils.util import get_platform
    return get_platform() + '-' + return_major_minor_python()
Then, when calling config.add_extension(), use:
include_dirs=['build/temp.' + return_include_dir()]
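An alternative sketch (not part of the workaround above, and assuming plain distutils is available) is to ask the build command itself for its build_temp path rather than reassembling the platform string by hand:

from distutils.command.build import build
from distutils.dist import Distribution

cmd = build(Distribution())
cmd.finalize_options()
print(cmd.build_temp)  # e.g. build/temp.linux-x86_64-3.6 (exact suffix varies by Python version)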
So the whole process for adding an f90wrap-wrapped, f2py extension to a Python package is:
def setup_fort_ext(args, parent_package='', top_path=''):
    from numpy.distutils.misc_util import Configuration
    from os.path import join
    import sys

    config = Configuration('', parent_package, top_path)
    fort_src = [join('PackageName/', 'fortran_source.f90')]
    config.add_library('_fortran_source', sources=fort_src,
                       extra_f90_compile_args=[args["compile_args"]],
                       extra_link_args=[args["link_args"]])

    sources = [join('PackageName', 'f90wrap_fortran_source.f90')]
    config.add_extension(name='_fortran_source',
                         sources=sources,
                         extra_f90_compile_args=[args["compile_args"]],
                         extra_link_args=[args["link_args"]],
                         libraries=['_fortran_source'],
                         include_dirs=['build/temp.' + return_include_dir()])
    return config
if __name__ == '__main__':
    import sys
    import subprocess
    import os

    install_numpy()         # installs numpy
    install_dependencies()  # calls pip to install any requirements

    from numpy.distutils.core import setup

    config = {'name': 'PackageName',
              'version': __version__,
              'description': 'Some package description',
              'long_description': open('README.txt').read(),
              'long_description_content_type': 'text/markdown',
              'author': 'Your name here',
              'author_email': 'your email here',
              'url': 'link to git repo here',
              'python_requires': '>=3.3',
              'packages': ['PackageName'],
              'package_dir': {'PackageName': 'PackageName'},
              'package_data': {'PackageName': ['*so*']},
              }

    config_fort = setup_fort_ext(args, parent_package='PackageName', top_path='')
    config2 = dict(config, **config_fort.todict())
    setup(**config2)
where fortran_source.f90 is wrapped beforehand with f90wrap; the original source is built as a library, and the resulting wrapper file (f90wrap_fortran_source.f90) is compiled by f2py as the extension source.
args in the above is just a dict with any linking or compile args you wish to pass through.
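For illustration, args could be as simple as this (the flag values are hypothetical and depend on your compiler):

# example args dict; replace the flags with whatever your toolchain needs
args = {"compile_args": "-fopenmp", "link_args": "-lgomp"}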
Below is my setup file for a Python wrapper. The issue I am having is that the C code I am writing makes calls to clock_gettime for profiling reasons. When I try to import the module I get the following error: undefined symbol: clock_gettime. I understand that I need to link with -lrt, but obviously my compiler is not getting that flag. What am I doing wrong?
from distutils.core import setup, Extension
import os

module1 = Extension('relaymod',
                    extra_compile_args=["-lrt"],  # flag so compiler links to realtime lib
                    sources=['relaymodule.c']
                    )
setup(name='relaymod',
      version='1.0',
      description="CTec Relay Board controller",
      author='Richard Kelly',
      url='site',
      ext_modules=[module1])
EDIT:
Looking at the distutils.core documentation, I believe I need to set extra_link_args. Below is my new change, but I am now getting this error: NameError: name 'extra_link_args' is not defined
EDIT2: OK, the code below is now working. I had a few things going on; after I removed the build folder and rebuilt, this worked.
from distutils.core import setup, Extension
import os

module1 = Extension('relaymod',
                    extra_link_args=["-lrt"],
                    sources=['relaymodule.c']
                    )

setup(name='relaymod',
      version='1.0',
      description="CTec Relay Board controller",
      author='Richard Kelly',
      url='site',
      ext_modules=[module1])
You are missing the equals sign (=): you need to write extra_link_args=[your list of link args].
Updated per the comments:
Delete the build folder before retrying.
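If you want to script that cleanup rather than delete the folder by hand, a trivial sketch:

import shutil

# remove the stale build directory before rebuilding
shutil.rmtree('build', ignore_errors=True)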
I'd like to build a static Cython library using distutils. I don't care about having it be a true Python extension module that can be import'ed. I just want to compile the code and put the objects in a static library. The code to create a dynamic library is very simple,
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("test", ["test.pyx"])]
)
Is there a simple way to make it static instead?
Distutils is very limited and not set up for static builds. I would advise you to use something else to compile the static library part of your project.
If your use case is to call into Cython code from other C code, then you want to use the public or api declarations along with your cdef declared functions and variables in your Cython code. Cython will allow the so-declared objects to be called from external C code, and it will generate a .h file alongside the .c file for you.
FYI, this works using numpy.distutils, but it is obviously nowhere near the simplicity, and probably not the portability, of the original shared-library code:
from distutils.sysconfig import get_python_inc
from Cython.Compiler.Main import compile
from numpy.distutils.misc_util import Configuration

compile('test.pyx')

config = Configuration(...)
config.add_installed_library('test',
                             ['test.c'],
                             'test',
                             {'include_dirs': [get_python_inc()]})
Assuming that you have sources, include_dirs and build_dir defined in your setup.py, this is how you can build a static library:
import os

from distutils.ccompiler import new_compiler
from distutils.core import Command
from sysconfig import get_paths

project_name = "slimey_project"
sources = ['source1.c']
include_dirs = ['include']
build_dir = os.path.join(os.path.dirname(__file__), 'build')

class StaticLib(Command):
    description = 'build static lib'
    user_options = []  # do not remove, needs to be stubbed out!
    python_info = get_paths()

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # Create compiler with default options
        c = new_compiler()

        # Optionally add include directories etc.
        for d in include_dirs:
            c.add_include_dir(d)
        c.add_include_dir(self.python_info['include'])

        # Compile into .o files
        objects = c.compile(sources)

        # Create static or shared library
        c.create_static_lib(objects, project_name, output_dir=build_dir)
Source: https://gist.github.com/udnaan/d549950a33fd82d13f9e6ba4aae82964
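To make the command invocable, you can hook it into setup() via cmdclass; the command name build_static below is just an example I chose, not something distutils defines:

from setuptools import setup

setup(
    name=project_name,
    cmdclass={'build_static': StaticLib},
)

# then run: python setup.py build_static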
I'm trying to figure out how to get python setup.py test to run the equivalent of python -m unittest discover. I don't want to use a run_tests.py script and I don't want to use any external test tools (like nose or py.test). It's OK if the solution only works on python 2.7.
In setup.py, I think I need to add something to the test_suite and/or test_loader fields in config, but I can't seem to find a combination that works correctly:
config = {
    'name': name,
    'version': version,
    'url': url,
    'test_suite': '???',
    'test_loader': '???',
}
Is this possible using only unittest built into python 2.7?
FYI, my project structure looks like this:
project/
    package/
        __init__.py
        module.py
    tests/
        __init__.py
        test_module.py
    run_tests.py  <- I want to delete this
    setup.py
Update: this is possible with unittest2, but I want to find something equivalent using only unittest.
From https://pypi.python.org/pypi/unittest2
unittest2 includes a very basic setuptools compatible test collector. Specify test_suite = 'unittest2.collector' in your setup.py. This starts test discovery with the default parameters from the directory containing setup.py, so it is perhaps most useful as an example (see unittest2/collector.py).
For now, I'm just using a script called run_tests.py, but I'm hoping I can get rid of this by moving to a solution that only uses python setup.py test.
Here's the run_tests.py I'm hoping to remove:
import unittest

if __name__ == '__main__':
    # use the default shared TestLoader instance
    test_loader = unittest.defaultTestLoader

    # use the basic test runner that outputs to sys.stderr
    test_runner = unittest.TextTestRunner()

    # automatically discover all tests in the current dir of the form test*.py
    # NOTE: only works for python 2.7 and later
    test_suite = test_loader.discover('.')

    # run the test suite
    test_runner.run(test_suite)
If you use Python 2.7+ or 3.2+, the solution is pretty simple:
test_suite="tests",
From Building and Distributing Packages with Setuptools (emphasis mine):
test_suite
A string naming a unittest.TestCase subclass (or a package or module
containing one or more of them, or a method of such a subclass), or naming
a function that can be called with no arguments and returns a unittest.TestSuite.
Hence, in setup.py you would add a function that returns a TestSuite:
import unittest

def my_test_suite():
    test_loader = unittest.TestLoader()
    test_suite = test_loader.discover('tests', pattern='test_*.py')
    return test_suite
Then, you would specify the command setup as follows:
setup(
    ...
    test_suite='setup.my_test_suite',
    ...
)
You don't need config to get this working. There are basically two main ways to do it:
The quick way
Rename your test_module.py to module_test.py (basically add _test as a suffix to tests for a particular module), and python will find it automatically. Just make sure to add this to setup.py:
from setuptools import setup, find_packages

setup(
    ...
    test_suite='tests',
    ...
)
The long way
Here's how to do it with your current directory structure:
project/
    package/
        __init__.py
        module.py
    tests/
        __init__.py
        test_module.py
    run_tests.py  <- I want to delete this
    setup.py
In tests/__init__.py, you want to import unittest and your unit test module test_module, and then create a function that builds the test suite. Type in something like this:
import unittest
import test_module

def my_module_suite():
    loader = unittest.TestLoader()
    suite = loader.loadTestsFromModule(test_module)
    return suite
The TestLoader class has other functions besides loadTestsFromModule. You can run dir(unittest.TestLoader) to see the other ones, but this one is the simplest to use.
Given your directory structure, you'll probably want test_module to be able to import your module script. You might have already done this, but just in case you didn't, you could add the parent path so that you can import the package and its module. At the top of your test_module.py, type:
import os, sys
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

import unittest
import package.module
...
Then finally, in setup.py, point test_suite at the tests module and the function you created, my_module_suite:
from setuptools import setup, find_packages

setup(
    ...
    test_suite='tests.my_module_suite',
    ...
)
Then you just run python setup.py test.
Here is a sample someone made as a reference.
One possible solution is to simply extend the test command for distutils and setuptools/distribute. This seems like a total kluge and way more complicated than I would prefer, but it seems to correctly discover and run all the tests in my package upon running python setup.py test. I'm holding off on selecting this as the answer to my question in hopes that someone will provide a more elegant solution :)
(Inspired by https://docs.pytest.org/en/latest/goodpractices.html#integrating-with-setuptools-python-setup-py-test-pytest-runner)
Example setup.py:
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

def discover_and_run_tests():
    import os
    import sys
    import unittest

    # get setup.py directory
    setup_file = sys.modules['__main__'].__file__
    setup_dir = os.path.abspath(os.path.dirname(setup_file))

    # use the default shared TestLoader instance
    test_loader = unittest.defaultTestLoader

    # use the basic test runner that outputs to sys.stderr
    test_runner = unittest.TextTestRunner()

    # automatically discover all tests
    # NOTE: only works for python 2.7 and later
    test_suite = test_loader.discover(setup_dir)

    # run the test suite
    test_runner.run(test_suite)

try:
    from setuptools.command.test import test

    class DiscoverTest(test):

        def finalize_options(self):
            test.finalize_options(self)
            self.test_args = []
            self.test_suite = True

        def run_tests(self):
            discover_and_run_tests()

except ImportError:
    from distutils.core import Command

    class DiscoverTest(Command):
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            discover_and_run_tests()

config = {
    'name': 'name',
    'version': 'version',
    'url': 'http://example.com',
    'cmdclass': {'test': DiscoverTest},
}

setup(**config)
Another less than ideal solution slightly inspired by http://hg.python.org/unittest2/file/2b6411b9a838/unittest2/collector.py
Add a module that returns a TestSuite of discovered tests. Then configure setup to call that module.
project/
    package/
        __init__.py
        module.py
    tests/
        __init__.py
        test_module.py
    discover_tests.py
    setup.py
Here's discover_tests.py:
import os
import sys
import unittest

def additional_tests():
    setup_file = sys.modules['__main__'].__file__
    setup_dir = os.path.abspath(os.path.dirname(setup_file))
    return unittest.defaultTestLoader.discover(setup_dir)
And here's setup.py:
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

config = {
    'name': 'name',
    'version': 'version',
    'url': 'http://example.com',
    'test_suite': 'discover_tests',
}

setup(**config)
Python's standard library unittest module supports discovery (in Python 2.7 and later, and Python 3.2 and later). If you can assume those minimum versions, then you can just add the discover command line argument to the unittest command.
Only a small tweak is needed to setup.py:
import setuptools.command.test
from setuptools import (find_packages, setup)

class TestCommand(setuptools.command.test.test):
    """ Setuptools test command explicitly using test discovery. """

    def _test_args(self):
        yield 'discover'
        for arg in super(TestCommand, self)._test_args():
            yield arg

setup(
    ...
    cmdclass={
        'test': TestCommand,
    },
)
This won't remove run_tests.py, but will make it work with setuptools. Add:
class Loader(unittest.TestLoader):
    def loadTestsFromNames(self, names, _=None):
        return self.discover(names[0])
Then in setup.py: (I assume you're doing something like setup(**config))
config = {
    ...
    'test_loader': 'run_tests:Loader',
    'test_suite': '.',  # your start_dir for discover()
}
The only downside I see is it's bending the semantics of loadTestsFromNames, but the setuptools test command is the only consumer, and calls it in a specified way.
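Putting it together, run_tests.py might then look something like this (a sketch combining the discovery script above with the loader, keeping the original direct-run behaviour):

import unittest

class Loader(unittest.TestLoader):
    def loadTestsFromNames(self, names, _=None):
        # setuptools passes the configured test_suite string here;
        # treat it as the start directory for discovery
        return self.discover(names[0])

if __name__ == '__main__':
    # unchanged behaviour when run directly: discover and run everything
    suite = unittest.defaultTestLoader.discover('.')
    unittest.TextTestRunner().run(suite)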