How can I pass a user-defined parameter both from the command line and setup.cfg configuration file to distutils' setup.py script?
I want to write a setup.py script, which accepts my package specific parameters. For example:
python setup.py install -foo myfoo
As Setuptools/Distutils are horribly documented, I had problems finding the answer to this myself. But eventually I stumbled across this example. Also, this similar question was helpful. Basically, a custom command with an option would look like:
from distutils.core import setup, Command

class InstallCommand(Command):
    description = "Installs the foo."
    user_options = [
        ('foo=', None, 'Specify the foo to bar.'),
    ]

    def initialize_options(self):
        self.foo = None

    def finalize_options(self):
        assert self.foo in (None, 'myFoo', 'myFoo2'), 'Invalid foo!'

    def run(self):
        install_all_the_things()

setup(
    ...,
    cmdclass={
        'install': InstallCommand,
    }
)
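With this in place, the option should be accepted along the lines of (distutils long options use a double dash and an optional = for the value):

python setup.py install --foo=myFoo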
Here is a very simple solution: all you have to do is filter sys.argv and handle it yourself before you call distutils' setup(..).
Something like this:
if "--foo" in sys.argv:
do_foo_stuff()
sys.argv.remove("--foo")
...
setup(..)
The documentation on how to do this with distutils is terrible; eventually I came across this one: The Hitchhiker's Guide to Packaging, which uses sdist and its user_options.
I find the extending-distutils reference not particularly helpful.
Although this looks like the "proper" way of doing it with distutils (at least the only one that I could find that is even vaguely documented), I could not find anything on the --with and --without switches mentioned in the other answer.
The problem with this distutils solution is that it is just way too involved for what I am looking for (which may also be the case for you).
Adding dozens of lines and subclassing sdist is just wrong for me.
Yes, it's 2015 and the documentation for adding commands and options in both setuptools and distutils is still largely missing.
After a few frustrating hours I figured out the following code for adding a custom option to the install command of setup.py:
from setuptools.command.install import install

class InstallCommand(install):
    user_options = install.user_options + [
        ('custom_option=', None, 'Path to something')
    ]

    def initialize_options(self):
        install.initialize_options(self)
        self.custom_option = None

    def finalize_options(self):
        # print('The custom option for install is', self.custom_option)
        install.finalize_options(self)

    def run(self):
        global my_custom_option
        my_custom_option = self.custom_option
        install.run(self)  # OR: install.do_egg_install(self)
It's worth mentioning that install.run() checks whether it was called "natively" or has been patched:
if not self._called_from_setup(inspect.currentframe()):
    orig.install.run(self)
else:
    self.do_egg_install()
At this point you register your command with setup:
setup(
    cmdclass={
        'install': InstallCommand,
    },
    ...
)
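The custom option should then be usable along the lines of (the value here is hypothetical):

python setup.py install --custom_option=/path/to/something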
You can't really pass custom parameters to the script. However, the following things are possible and could solve your problem:
optional features can be enabled using --with-featurename, and standard features can be disabled using --without-featurename. [AFAIR this requires setuptools]
you can use environment variables; these, however, have to be set beforehand on Windows, whereas prefixing the command works on Linux/OS X (FOO=bar python setup.py).
you can extend distutils with your own cmd_classes, which can implement new features. They are also chainable, so you can use that to change variables in your script: python setup.py foo install will execute the foo command before it executes install (see the sketch below).
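For example, a minimal sketch of such a chainable command, following the Command API shown earlier (the flag name and the way it is consumed later are hypothetical):

from distutils.core import setup, Command

class foo(Command):
    description = "enable foo support for later commands"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # Hypothetical: set a module-level flag that a later
        # command (e.g. install) can inspect.
        global FOO_ENABLED
        FOO_ENABLED = True

setup(
    name='example',
    version='0.1',
    cmdclass={'foo': foo},
)

Running python setup.py foo install would then execute foo's run() before the standard install.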
Hope that helps somehow. Generally speaking, I would suggest providing a bit more information about what exactly your extra parameter should do; maybe there is a better solution available.
I successfully used a workaround similar to totaam's suggestion: I ended up popping my extra arguments from the sys.argv list:
import sys
from distutils.core import setup

foo = 0
if '--foo' in sys.argv:
    index = sys.argv.index('--foo')
    sys.argv.pop(index)  # Removes the '--foo'
    foo = sys.argv.pop(index)  # Returns the element after the '--foo'

# The foo is now ready to use for the setup
setup(...)
Some extra validation could be added to ensure the inputs are good, but this is how I did it.
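For instance, a hedged sketch of what that extra validation might look like (the whitelist of allowed values is hypothetical):

import sys

ALLOWED_FOOS = ('myFoo', 'myFoo2')  # hypothetical whitelist

foo = None
if '--foo' in sys.argv:
    index = sys.argv.index('--foo')
    sys.argv.pop(index)  # Removes the '--foo'
    try:
        foo = sys.argv.pop(index)  # The value following '--foo'
    except IndexError:
        raise SystemExit("--foo requires a value")
    if foo not in ALLOWED_FOOS:
        raise SystemExit("Invalid --foo value: %r" % foo)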
A quick and easy way similar to that given by totaam would be to use argparse to grab the -foo argument and leave the remaining arguments for the call to distutils.setup(). Using argparse for this would be better than iterating through sys.argv manually imho. For instance, add this at the beginning of your setup.py:
import argparse
import sys

argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('--foo', help='required foo argument', required=True)
args, unknown = argparser.parse_known_args()
sys.argv = [sys.argv[0]] + unknown
The add_help=False argument means that you can still get the regular setup.py help using -h (provided --foo is given).
Perhaps you are an unseasoned programmer like me who still struggled after reading all the answers above. Thus, you might find another example potentially helpful (and to address the comments in previous answers about entering the command line arguments):
import subprocess
import sys
from distutils.core import Command

class RunClientCommand(Command):
    """
    A command class to run the client GUI.
    """
    description = "runs client gui"

    # The format is (long option, short option, description).
    user_options = [
        ('socket=', None, "The socket of the server to connect to (e.g. '127.0.0.1:8000')"),
    ]

    def initialize_options(self):
        """
        Sets the default value for the server socket.

        The method is responsible for setting default values for
        all the options that the command supports.

        Option dependencies should not be set here.
        """
        self.socket = '127.0.0.1:8000'

    def finalize_options(self):
        """
        Overriding a required abstract method.

        The method is responsible for setting and checking the
        final values and option dependencies for all the options
        just before the method run is executed.

        In practice, this is where the values are assigned and verified.
        """
        pass

    def run(self):
        """
        Semantically, runs 'python src/client/view.py SERVER_SOCKET' on the
        command line.
        """
        print(self.socket)
        errno = subprocess.call([sys.executable, 'src/client/view.py', self.socket])
        if errno != 0:
            raise SystemExit("Unable to run client GUI!")
setup(
    # Some other omitted details
    cmdclass={
        'runClient': RunClientCommand,
    },
)
The above is tested and taken from some code I wrote. I have also included slightly more detailed docstrings to make things easier to understand.
As for the command line: python setup.py runClient --socket=127.0.0.1:7777. A quick double check using print statements shows that indeed the correct argument is picked up by the run method.
Other resources I found useful (with more examples):
Custom distutils commands
https://seasonofcode.com/posts/how-to-add-custom-build-steps-and-commands-to-setuppy.html
To be fully compatible with both python setup.py install and pip install . you need to use environment variables, because the pip option --install-option= is bugged:
pip --install-option leaks across lines
Determine what should be done about --(install|global)-option with Wheels
pip not naming abi3 wheels correctly
This is a full example not using the --install-option:
import os
import sys

environment_variable_name = 'MY_ENVIRONMENT_VARIABLE'
environment_variable_value = os.environ.get(environment_variable_name, None)

if environment_variable_value is not None:
    sys.stderr.write("Using '%s=%s' environment variable!\n" % (
            environment_variable_name, environment_variable_value))

setup(
    name='packagename',
    version='1.0.0',
    ...
)
Then, you can run it like this on Linux:
MY_ENVIRONMENT_VARIABLE=1 pip install .
MY_ENVIRONMENT_VARIABLE=1 pip install -e .
MY_ENVIRONMENT_VARIABLE=1 python setup.py install
MY_ENVIRONMENT_VARIABLE=1 python setup.py develop
But, if you are on Windows, run it like this:
set "MY_ENVIRONMENT_VARIABLE=1" && pip install .
set "MY_ENVIRONMENT_VARIABLE=1" && pip install -e .
set "MY_ENVIRONMENT_VARIABLE=1" && python setup.py install
set "MY_ENVIRONMENT_VARIABLE=1" && python setup.py develop
References:
How to obtain arguments passed to setup.py from pip with '--install-option'?
Passing command line arguments to pip install
Passing the library path as a command line argument to setup.py
Related
I've recently re-opened a project I worked on a couple of years ago. I wrote a small python script to build the project. I would like to port that to CMake instead.
The problem I'm having is that the script uses pkg-config on Linux to find the FUSE headers and libraries. I'm having trouble porting this to CMake.
Here's the current Python script:
import subprocess, sys, os, shutil

def call(command):
    c = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    c.wait()
    return c.stdout.read()

class GCC:
    args = None

    def __init__(self, initial_args):
        self.args = initial_args

    def addPKG(self, package):
        self.args.extend(package)

    def addFile(self, name):
        self.args.append(name)

    def compile(self, out_name):
        self.args.extend(["-o", out_name])
        print " ".join(self.args)
        gcc = subprocess.Popen(self.args)
        return gcc.wait() == 0

if __name__ == '__main__':
    cflags = call("pkg-config fuse --libs --cflags").split()
    print cflags
    gcc = GCC(["gcc", "-g", "-Wall", "-pg"])
    gcc.addFile("argsparse.c")
    gcc.addFile("hidden.c")
    #gcc.addFile("fs.c")
    gcc.addFile("initialization.c")
    gcc.addFile("resolve.c")
    gcc.addFile("utilities.c")
    gcc.addFile("winhomefs0.4.c")
    gcc.addPKG(cflags)
    gcc.addFile("-lulockmgr")
    if gcc.compile("winhomefs") and 'install' in sys.argv:
        if os.getuid() == 0:
            shutil.copy("winhomefs", "/usr/local/bin/winhomefs")
Here's my current CMakeLists.txt file.
cmake_minimum_required (VERSION 2.8.11)
project (HomeFS)
set (CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH}
"${CMAKE_SOURCE_DIR}/CMakeModules/")
find_package(FUSE REQUIRED)
add_library(argsparse argsparse.c)
add_library(hidden hidden.c)
add_library(initialization initialization.c)
add_library(resolve resolve.c)
add_library(utilities utilities.c)
add_executable(homefs winhomefs0.4.c)
The issue I'm having is with the find-FUSE part. I've tried several different permutations of it, including the following...
https://github.com/tarruda/encfs/blob/master/CMakeModules/FindFUSE.cmake
https://github.com/Pronghorn/pronghorn/blob/master/FindFUSE.cmake
Neither seems to work; I get:
...argsparse.c:21:22: fatal error: fuse_opt.h: No such file or directory
#include <fuse_opt.h>
^
compilation terminated.
The Python script works, however, which suggests there's something wrong with how CMake is configured.
For reference, the pkg-config line above outputs the following on my system:
-D_FILE_OFFSET_BITS=64 -I/usr/include/fuse -lfuse -pthread
Thanks for any help!
Per Fraser's feedback I've updated two things.
My CMakeLists.txt now looks like:
cmake_minimum_required (VERSION 2.8.11)
project (HomeFS)
set (CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH}
"${CMAKE_SOURCE_DIR}/CMakeModules/")
find_package(FUSE REQUIRED)
add_executable(homefs
argsparse.c
hidden.c
initialization.c
resolve.c
utilities.c
winhomefs0.4.c)
set(CMAKE_C_FLAGS "-D_FILE_OFFSET_BITS=64 -lulockmgr")
target_include_directories(homefs PRIVATE ${FUSE_INCLUDE_DIR})
target_link_libraries(homefs ${FUSE_LIBRARIES})
And all references to <fuse.h> and <fuse_opt.h> have been updated to <fuse/fuse.h> and so forth. I also had to add the flag -D_FILE_OFFSET_BITS=64 and it now compiles cleanly.
However I'm still getting a linker error.
winhomefs0.4.c:(.text+0x10b2): undefined reference to `ulockmgr_op'
collect2: error: ld returned 1 exit status
I tried adding -lulockmgr to the C flags, but that's not working.
Google hasn't been my friend on this; there are very few references to ulockmgr. Do I need to implement a FindULOCKMGR CMake module, or do I need to add the line elsewhere?
OK, after some trial and error plus logical thought, I solved the issue: I needed to move the -lulockmgr string from CFLAGS to the target_link_libraries line.
You're probably just missing a couple of calls in your CMakeLists.txt.
The line find_package(FUSE REQUIRED) will try and find the path to the FUSE headers and to the FUSE lib(s). The comment blocks at the top of the two FindFUSE.cmake files provide details of what variables each sets. Take the encfs one for example. It will set FUSE_FOUND to true or false, allowing you to exit your script with a helpful error message if FUSE isn't found.
The variable FUSE_INCLUDE_DIR will be set to the absolute path of the folder containing the FUSE header, and FUSE_LIBRARIES will be set to a list of absolute paths to the FUSE libs.
What's currently missing from your CMakeLists.txt is any use of these variables.
You would use them in calls to target_include_directories and target_link_libraries - e.g.
target_include_directories(homefs PRIVATE ${FUSE_INCLUDE_DIR})
target_link_libraries(homefs ${FUSE_LIBRARIES})
Another issue is that you're creating five separate libraries with your five add_library calls, but then not using them. At the very least I'd have expected to see these also being linked to the exe via a target_link_libraries call.
I don't know Python well enough to know what the original script is doing, but I think the more likely solution is that these should all just be part of the exe, i.e. replace:
add_library(argsparse argsparse.c)
add_library(hidden hidden.c)
add_library(initialization initialization.c)
add_library(resolve resolve.c)
add_library(utilities utilities.c)
with:
add_executable(homefs
argsparse.c
hidden.c
initialization.c
resolve.c
utilities.c
winhomefs0.4.c)
I discovered entry_points of setuptools:
http://pythonhosted.org/setuptools/setuptools.html#dynamic-discovery-of-services-and-plugins
quote: setuptools supports creating libraries that “plug in” to extensible applications and frameworks, by letting you register “entry points” in your project that can be imported by the application or framework.
But I have not seen a project using them.
Are there examples of projects which use them?
If not, why are they not used?
There are loads of examples. Any project that defines console scripts uses them, for example. A quick search on GitHub gives you plenty to browse through.
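As a minimal sketch of that most common case, a setup.py declaring a console script (the package and function names are hypothetical):

from setuptools import setup

setup(
    name='hello-cli',
    version='0.1',
    py_modules=['hello'],
    entry_points={
        'console_scripts': [
            # Installs a `hello` command that runs hello.main()
            'hello = hello:main',
        ],
    },
)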
I'll focus on one specific example (one that is not on GitHub): Babel.
Babel uses entry_points both for console scripts and to define extension points for translatable text extraction. See its setup.py source:
if have_setuptools:
    extra_arguments = dict(
        zip_safe = False,
        test_suite = 'babel.tests.suite',
        tests_require = ['pytz'],
        entry_points = """
        [console_scripts]
        pybabel = babel.messages.frontend:main

        [distutils.commands]
        compile_catalog = babel.messages.frontend:compile_catalog
        extract_messages = babel.messages.frontend:extract_messages
        init_catalog = babel.messages.frontend:init_catalog
        update_catalog = babel.messages.frontend:update_catalog

        [distutils.setup_keywords]
        message_extractors = babel.messages.frontend:check_message_extractors

        [babel.checkers]
        num_plurals = babel.messages.checkers:num_plurals
        python_format = babel.messages.checkers:python_format

        [babel.extractors]
        ignore = babel.messages.extract:extract_nothing
        python = babel.messages.extract:extract_python
        javascript = babel.messages.extract:extract_javascript
        """,
    )
Tools like pip and zc.buildout use the console_scripts entry point to create command-line scripts (here one called pybabel, running the main() callable in the babel.messages.frontend module).
The distutils.commands entry points define additional commands you can use when running setup.py; these can be used in your own projects to invoke Babel command-line utilities right from your setup script.
Last, but not least, it registers its own checkers and extractors. The babel.extractors entry point is loaded by the babel.messages.extract.extract function, using the setuptools pkg_resources module, giving access to all installed Python projects that registered that entry point. The following code looks for a specific extractor in those entries:
try:
    from pkg_resources import working_set
except ImportError:
    pass
else:
    for entry_point in working_set.iter_entry_points(GROUP_NAME, method):
        func = entry_point.load(require=True)
        break
This lets any project register additional extractors; simply add an entry point in your setup.py and Babel can make use of it.
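A minimal sketch of such a plugin's setup.py, assuming a hypothetical module my_extractor that provides an extract_custom function matching Babel's extractor signature:

from setuptools import setup

setup(
    name='my-babel-plugin',
    version='0.1',
    py_modules=['my_extractor'],
    entry_points="""
    [babel.extractors]
    custom = my_extractor:extract_custom
    """,
)

Once installed, Babel's extraction machinery can discover and load the custom extractor by its registered name.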
Sentry is a good example. Sentry's author even created a django package named Logan to convert standard django management commands to console scripts.
I am writing a module that wraps the functionality of an external binary.
For example, I wrap the ls program in a Python module my_wrapper.py:
import my_wrapper
print my_wrapper.ls('some_directory/')
# list files in some_directory
and in my_wrapper.py I do:
# my_wrapper.py
PATH_TO_LS = '/bin/ls'
def ls(path):
proc = subprocess.Popen([PATH_TO_LS, path], ...)
...
return paths
(of course, I do not wrap ls but some other binary)
The binary might be installed with an arbitrary location, like /usr/bin/, /opt/ or even at the same place as the python script (./binaries/)
Question:
What would be the cleanest (from the user perspective) way to set the path to the binary?
Should the user specify my_wrapper.PATH_TO_LS = ... or invoke some my_wrapper.set_binary_path(path) at the beginning of his script?
Maybe it would be better to specify it in env, and the wrapper would find it with os.environ?
If the wrapper is distributed as an egg, can I require during installation that the executable is already present on the system (see below)?
egg example:
# setup.py
setup(
    name='my_wrapper',
    requires_binaries=['the_binary']  # <--- require that the binary is
                                      #      already installed and visible
                                      #      on the execution path
)
or
easy_install my_wrapper BINARY_PATH=/usr/local/bin/the_binary
Create a "configuration object" with sane defaults. Allow the consumer to modify the values as appropriate. Accept a configuration object instance to your functions, taking the one you created by default.
I am developing a product for Plone 4, inside the zeocluster/src/... directory of an installation, and I have an automated test. Unfortunately, when I run 'bin/client1 shell' and then (path to Plone's Python)/bin/python setup.py test, it fails. The error is
File "buildout-cache/eggs/Products.PloneTestCase-0.9.12-py2.6.egg/Products/PloneTestCase/PloneTestCase.py", line 109, in getPortal
return getattr(self.app, portal_name)
AttributeError: plone
What is the correct way to run automated tests in Plone 4?
In setup.py,
...
test_suite = "nose.collector"
...
The failing test:
import unittest
from Products.PloneTestCase import PloneTestCase as ptc

ptc.setupPloneSite()

class NullTest(ptc.PloneTestCase):

    def testTest(self):
        pass

def test_suite():
    return unittest.TestSuite([
        unittest.makeSuite(NullTest)
    ])

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
Best is to edit your buildout.cfg and add a part that creates a 'bin/test' script. Something like this:
[test]
recipe = zc.recipe.testrunner
# Note that only tests for packages that are explicitly named (instead
# of 'implicitly' added to the instance as dependency) can be found.
eggs =
# Use the name of the plone.recipe.zope2instance part here, might be zeoclient instead:
${instance:eggs}
defaults = ['--exit-with-status', '--auto-color', '--auto-progress']
Do not forget to add 'test' to the 'parts' in the main 'buildout' section of your buildout.cfg. Run bin/buildout and you should now have a bin/test script. See the PyPI page of this recipe for more options and explanation.
Now running 'bin/test' should run all tests for all eggs explicitly named in the instance part. This may run far too many tests. Use 'bin/test -s your.package' to run only the tests for your.package, provided your.package is part of the eggs in the instance.
Note that instead of the 'pass' that you now have in the test, it is better to add a test that you know for certain will fail, like 'self.assertEqual(True, False)'. Then it is easier to see that your test indeed has been run and that it fails as expected.
When I have a simple buildout for testing one specific package that I am developing, I usually extend one of the configs in the plonetest buildout, like this one for Plone 4; you can have a look at that for inspiration.
You need to use zope.testrunner and zope.testing to run your tests. Plone tests cannot be run via nose and we don't support the 'test_suite' argument to setup.py as invented by setuptools.
The other answers explain how to get a test runner script set up.
ptc.setupPloneSite() registers a deferred function that will actually be run when the zope.testrunner layer is set up. I'm guessing you're not using zope.testrunner, so the layer isn't being set up and the Plone site is never created; hence the AttributeError when it subsequently tries to get the portal object.
I would like to be able to run a nose test script which accepts command line arguments. For example, something along the lines:
test.py
import nose, sys

def test():
    # do something with the command line arguments
    print sys.argv

if __name__ == '__main__':
    nose.runmodule()
However, whenever I run this with a command line argument, I get an error:
$ python test.py arg
E
======================================================================
ERROR: Failure: ImportError (No module named arg)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", line 368, in loadTestsFromName
module = resolve_name(addr.module)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py", line 334, in resolve_name
module = __import__('.'.join(parts_copy))
ImportError: No module named arg
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Apparently, nose tries to do something with the arguments passed in sys.argv. Is there a way to make nose ignore those arguments?
Alright, I hate "why would you want to do that?" answers just as much as anyone, but I'm going to have to make one here. I hope you don't mind.
I'd argue that doing whatever you're wanting to do isn't within the scope of the nose framework. Nose is intended for automated tests. If you have to pass in command-line arguments for the test to pass, then it isn't automated. Now, what you can do is something like this:
import sys

class test_something(object):

    def setUp(self):
        sys.argv[1] = 'arg'
        del sys.argv[2]  # remember that -s is in sys.argv[2], see below

    def test_method(self):
        print sys.argv
If you run that, you get this output:
[~] nosetests test_something.py -s
['/usr/local/bin/nosetests', 'arg']
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
(Remember to pass in the -s flag if you want to see what goes on stdout)
However, I'd probably still recommend against that, as it's generally a bad idea to mess with global state in automated tests if you can avoid it. What I would likely do is adapt whatever code I'm wanting to test to take an argv list. Then, you can pass in whatever you want during testing and pass in sys.argv in production.
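A minimal sketch of that refactoring (the function name and the way the argument is used are hypothetical):

import sys

def main(argv=None):
    # Accept an explicit argv list so tests can inject arguments
    # without touching global state.
    if argv is None:
        argv = sys.argv[1:]
    library_name = argv[0] if argv else 'default_impl'
    return library_name

# The production entry point uses the real command line:
#     main()
# A test passes its own list instead:
#     assert main(['implementation_a']) == 'implementation_a'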
UPDATE: The reason why I need to do it is because I am testing multiple implementations of the same library. To test that those implementations are correct, I use a single nose script that accepts as a command line argument the library that it should import for testing.
It sounds like you may want to try your hand at writing a nose plugin. It's pretty easy to do. Here are the latest docs.
You could use another means of getting stuff into your code:
import os
print os.getenv('KEY_THAT_MIGHT_EXIST', default_value)
Then just remember to set your environment before running nose.
I think that is a perfectly acceptable scenario. I also needed to do something similar in order to run the tests against different scenarios (dev, qa, prod, etc.), and there I needed the right URLs and configurations for each environment.
The solution I found was to use the nose-testconfig plugin (link here). It is not exactly passing command line arguments, but creating a config file with all your parameters, and then passing this config file as argument when you execute your nose-tests.
The config file has the following format:
[group1]
env=qa
[urlConfig]
address=http://something
[dbConfig]
user=test
pass=test
And you can read the arguments using:
from testconfig import config
print(config['dbConfig']['user'])
For now I am using the following hack:
args = sys.argv[1:]
sys.argv = sys.argv[0:1]
which just reads the argument into a local variable, and then deletes all the additional arguments in sys.argv so that nose does not get confused by them.
Just running nose and passing in parameters will not work, as nose will attempt to interpret the arguments as nose parameters, so you get the problems you are seeing.
I do not think nose supports parameter passing directly yet, but this nose plug-in, nose-testconfig, allows you to write tests like the one below:
from testconfig import config

def test_os_specific_code():
    os_name = config['os']['type']
    if os_name == 'nt':
        pass  # some nt specific tests
    else:
        pass  # tests for any other os