Installing data_files in setup.py with pip install -e - python

I'm trying to provide a bash completion script for my CLI tool that is written in Python. According to the Python Packaging Authority, data_files in setup.py is exactly what I need:
Although configuring package_data is sufficient for most needs, in some cases you may need to place data files outside of your packages. The data_files directive allows you to do that. It is mostly useful if you need to install files which are used by other programs, which may be unaware of Python packages.
So I added the completion file like this:
data_files=[
    ('/usr/share/bash-completion/completions', ['completion/dotenv']),
],
and tried to test it with:
pip install -e .
in my virtual environment. However, the completion script does not get installed. Did I forget something, or is pip broken? The full project can be found here.

I had the same issue and implemented a workaround.
It seems to me that python setup.py develop (or pip install -e .) does not run the same functions as python setup.py install.
In fact, by looking at the source code I noticed that python setup.py install runs build_py:
https://github.com/python/cpython/blob/master/Lib/distutils/command/build_py.py#L134
https://github.com/pypa/setuptools/blob/master/setuptools/command/build_py.py
After some digging I opted to override the develop command as follows. The following code is for Python 3.6:
""" SetupTool Entry Point """
import sys
from pathlib import Path
from shutil import copy2
from setuptools import find_packages, setup
from setuptools.command.develop import develop
# create os_data_files that will be used by the default install command
os_data_files = [
(
f"{sys.prefix}/config", # providing absolute path, sys.prefix will be different in venv
[
"src/my_package/config/properties.env",
],
),
]
def build_package_data():
""" implement the necessary function for develop """
for dest_dir, filenames in os_data_files:
for filename in filenames:
print(
"CUSTOM SETUP.PY (build_package_data): copy %s to %s"
% (filename, dest_dir)
)
copy2(filename, dest_dir)
def make_dirstruct():
""" Set the the logging path """
for subdir in ["config"]:
print("CUSTOM SETUP.PY (make_dirstruct): creating %s" % subdir)
(Path(BASE_DIR) / subdir).mkdir(parents=True, exist_ok=True)
class CustomDevelopCommand(develop):
""" Customized setuptools install command """
def run(self):
develop.run(self)
make_dirstruct()
build_package_data()
# provide the relevant information for stackoverflow
setup(
package_dir={"": "src"},
packages=find_packages("src"),
data_files=os_data_files,
cmdclass={"develop": CustomDevelopCommand},
)

Related

Cannot execute function in python setup.py [duplicate]

Is it possible to specify a post-install Python script file as part of the setuptools setup.py file so that a user can run the command:
python setup.py install
on a local project file archive, or
pip install <name>
for a PyPI project and the script will be run at the completion of the standard setuptools install? I am looking to perform post-install tasks that can be coded in a single Python script file (e.g. deliver a custom post-install message to the user, pull additional data files from a different remote source repository).
I came across this SO answer from several years ago that addresses the topic and it sounds as though the consensus at that time was that you need to create an install subcommand. If that is still the case, would it be possible for someone to provide an example of how to do this so that it is not necessary for the user to enter a second command to run the script?
Note: The solution below only works when installing a source distribution zip or tarball, or installing in editable mode from a source tree. It will not work when installing from a binary wheel (.whl)
This solution is more transparent:
You will make a few additions to setup.py and there is no need for an extra file.
Also, you need to consider two different post-installation hooks: one for development/editable mode and one for install mode.
Add these two classes, which include your post-install script, to setup.py:
from setuptools import setup
from setuptools.command.develop import develop
from setuptools.command.install import install


class PostDevelopCommand(develop):
    """Post-installation for development mode."""
    def run(self):
        develop.run(self)
        # PUT YOUR POST-INSTALL SCRIPT HERE or CALL A FUNCTION


class PostInstallCommand(install):
    """Post-installation for installation mode."""
    def run(self):
        install.run(self)
        # PUT YOUR POST-INSTALL SCRIPT HERE or CALL A FUNCTION
and add the cmdclass argument to the setup() call in setup.py:
setup(
    ...
    cmdclass={
        'develop': PostDevelopCommand,
        'install': PostInstallCommand,
    },
    ...
)
You can even call shell commands during installation, like in this example which does pre-installation preparation:
from setuptools import setup
from setuptools.command.develop import develop
from setuptools.command.install import install
from subprocess import check_call


class PreDevelopCommand(develop):
    """Pre-installation for development mode."""
    def run(self):
        check_call("apt-get install this-package".split())
        develop.run(self)


class PreInstallCommand(install):
    """Pre-installation for installation mode."""
    def run(self):
        check_call("apt-get install this-package".split())
        install.run(self)


setup(
    ...
P.S. There are no pre-install entry points available in setuptools. Read this discussion if you are wondering why.
Note: The solution below only works when installing a source distribution zip or tarball, or installing in editable mode from a source tree. It will not work when installing from a binary wheel (.whl)
This is the only strategy that has worked for me when the post-install script requires that the package dependencies have already been installed:
import atexit

import setuptools
from setuptools.command.install import install


def _post_install():
    print('POST INSTALL')


class new_install(install):
    def __init__(self, *args, **kwargs):
        super(new_install, self).__init__(*args, **kwargs)
        atexit.register(_post_install)


setuptools.setup(
    cmdclass={'install': new_install},
)
Note: The solution below only works when installing a source distribution zip or tarball, or installing in editable mode from a source tree. It will not work when installing from a binary wheel (.whl)
A solution could be to include a post_setup.py in setup.py's directory. post_setup.py will contain a function that performs the post-install step, and setup.py will only import and launch it at the appropriate time.
In setup.py:
from distutils.core import setup
from distutils.command.install_data import install_data

try:
    from post_setup import main as post_install
except ImportError:
    post_install = lambda: None


class my_install(install_data):
    def run(self):
        install_data.run(self)
        post_install()


if __name__ == '__main__':
    setup(
        ...
        cmdclass={'install_data': my_install},
        ...
    )
In post_setup.py:
def main():
    """Do your post-install work here"""
    pass


if __name__ == '__main__':
    main()
As long as you launch setup.py from its own directory, it will be able to import post_setup.py; otherwise it will fall back to an empty function.
In post_setup.py, the if __name__ == '__main__': guard allows you to launch the post-install manually from the command line.
Combining the answers from #Apalala, #Zulu and #mertyildiran, this worked for me in a Python 3.5 environment:
import atexit
import os
import sys

from setuptools import setup
from setuptools.command.install import install

# name of the installed package directory; not defined in the original snippet
my_name = 'your_package_name'


class CustomInstall(install):
    def run(self):
        def _post_install():
            def find_module_path():
                for p in sys.path:
                    if os.path.isdir(p) and my_name in os.listdir(p):
                        return os.path.join(p, my_name)
            install_path = find_module_path()

            # Add your post install code here

        atexit.register(_post_install)
        install.run(self)


setup(
    cmdclass={'install': CustomInstall},
    ...
This also gives you access to the installation path of the package in install_path, in case you need to do some shell work on it.
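For illustration, a minimal sketch of what such shell work might look like inside _post_install; the run.sh helper script is hypothetical and not part of the original answer:
import os
import stat
import subprocess


def _do_shell_work(install_path):
    # hypothetical example: mark a bundled helper script as executable and run it once
    script = os.path.join(install_path, 'run.sh')
    if os.path.isfile(script):
        os.chmod(script, os.stat(script).st_mode | stat.S_IEXEC)
        subprocess.check_call([script])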
I think the easiest way to perform the post-install, and keep the requirements, is to decorate the call to setup(...):
from setuptools import setup


def _post_install(setup):
    def _post_actions():
        do_things()
    _post_actions()
    return setup


setup = _post_install(
    setup(
        name='NAME',
        install_requires=['...
    )
)
This runs setup() as part of the assignment. Once the requirements installation is done, it runs the _post_install() function, which in turn runs the inner _post_actions() function.
If you use atexit, there is no need to create a new cmdclass. You can simply register your atexit handler right before the setup() call. It does the same thing.
Also, if you need dependencies to be installed first, this does not work with pip install since your atexit handler will be called before pip moves the packages into place.
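For illustration, a minimal sketch of that approach (the _post_install body here is just a placeholder):
import atexit

from setuptools import setup


def _post_install():
    # placeholder for whatever post-install work you need
    print('POST INSTALL')


# register the handler right before calling setup(); it fires when the
# setup.py process exits (see the pip caveat above)
atexit.register(_post_install)

setup(
    name='NAME',
    # ... your usual setup() arguments ...
)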
I wasn't able to solve my problem with any of the presented recommendations, so here is what helped me.
You can call the function you want to run after installation right after setup() in setup.py, like this:
from setuptools import setup


def _post_install():
    <your code>


setup(...)
_post_install()

Setup.py, setuptools, cmdclass - Custom commands not working

I am trying to create a directory upon a package installation. The function to create the directory, by itself, successfully creates it. Additionally, when I run "python3.7 setup.py install", the directory is created.
Why does this not work when using pip though? I don't see any errors. When I added print statements, I do not see them.
I have chosen to override setuptools' 'bdist_egg' command instead of the 'install' command for the reasons found here:
Running custom setuptools build during install
from sys import platform
from setuptools import setup
from os import mkdir, chmod, path
from setuptools.command.bdist_egg import bdist_egg as _bdist_egg


class OverrideInstall(_bdist_egg):

    def run(self):
        _bdist_egg.run(self)
        # create log directory
        log = "/var/log/FOO"
        mode = 0o777
        if not path.exists(log):
            mkdir(log)
            chmod(log, mode)


setup(
    name='cox-nams',
    version='FOO',
    description='FOO',
    <-- output omitted for brevity / security>
    cmdclass={"bdist_egg": OverrideInstall},
)
Apparently not supported with pip install.

Pip install package prior to setup() in setup.py

I have some .proto gRPC files I want to compile as part of the setup.py script. This requires running from grpc_tools import protoc and calling protoc before setup(args). The goal is to compile and install the pb files from pip install pkgname.
E.g.
# setup.py

# generate our pb2 files in the temp directory structure
compile_protobufs(pkgname)

# this will package the generated files and put them in site-packages or .whl
setup(
    name=pkgname,
    install_requires=['grpcio-tools', ...],
    ...
)
This works as intended: I get the pb files in my site-packages or in the wheel without them having to exist in the source folder. However, this pattern means I cannot naively pip install pkgname from scratch, as the compile_protobufs step depends on grpcio-tools, which does not get installed until setup().
I could use setup_requires, but that is on the chopping block. I could just install the dependencies first (right now I use RUN pip install -r build-require.txt && pip install pkgname/ ), but it still seems like there ought to be a cleaner way.
Am I even going about this pattern correctly or am I missing some packaging idiom?
My criteria:
Generally this is run inside a container, so minimizing external deps
I want the _pb2.py files regenerated each time I pip install
These files need to also make their way into any .whl or tar.
Looks like it is already documented here:
https://github.com/grpc/grpc/tree/master/tools/distrib/python/grpcio_tools#usage
So your setup.py could look like this:
#!/usr/bin/env python3
import distutils.command.install

import setuptools


class build_package_protos(setuptools.Command):
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        from grpc_tools import command
        command.build_package_protos(self.distribution.package_dir[''])


class install(distutils.command.install.install):
    _sub_command = ('build_package_protos', None,)
    _sub_commands = distutils.command.install.install.sub_commands
    sub_commands = [_sub_command] + _sub_commands


def setup():
    setuptools.setup(
        # see 'setup.cfg'
        cmdclass={
            'build_package_protos': build_package_protos,
            'install': install,
        },
        setup_requires=[
            'grpcio-tools',
        ],
    )


if __name__ == '__main__':
    setup()

How to install data_files of python package into home directory

Here's my setup.py
setup(
    name='shipane_sdk',
    version='1.0.0.a5',
    # ...
    data_files=[(os.path.join(os.path.expanduser('~'), '.shipane_sdk', 'config'),
                 ['config/scheduler-example.ini'])],
    # ...
)
Packing & Uploading commands:
python setup.py sdist
python setup.py bdist_wheel --universal
twine upload dist/*
Installing command:
pip install shipane_sdk
But it doesn't install config/scheduler-example.ini under ~/.shipane_sdk.
The pip documentation says:
setuptools allows absolute “data_files” paths, and pip honors them as absolute, when installing from sdist. This is not true when installing from wheel distributions. Wheels don’t support absolute paths, and they end up being installed relative to “site-packages”. For discussion see wheel Issue #92.
Do you know how to make this work when installing from sdist?
There are multiple solutions to this problem, and it is all very confusing how inconsistently the packaging tools work. Some time ago I found that the following workaround worked best for me with sdist (note that it doesn't work with wheels!):
Instead of using data_files, attach the files to your package using MANIFEST.in, which in your case could look like this:
include config/scheduler-example.ini
Copy the files "manually" to the chosen location using this snippet in setup.py:
import sys  # needed for sys.argv below; not shown in the original snippet

if 'install' in sys.argv:
    from pkg_resources import Requirement, resource_filename
    import os
    import shutil

    # retrieve the temporary path where the package has been extracted to for installation
    conf_path_temp = resource_filename(Requirement.parse(APP_NAME), "conf")

    # if the config directory tree doesn't exist, create it
    if not os.path.exists(CONFIG_PATH):
        os.makedirs(CONFIG_PATH)

    # copy every file from the given location to the specified ``CONFIG_PATH``
    for file_name in os.listdir(conf_path_temp):
        file_path_full = os.path.join(conf_path_temp, file_name)
        if os.path.isfile(file_path_full):
            shutil.copy(file_path_full, CONFIG_PATH)
In my case, "conf" was the subdirectory in the package that contained my data files, and they were supposed to be installed into CONFIG_PATH, which was something like /etc/APP_NAME.

Compiling & installing C executable using python's setuptools/setup.py?

I've got a python module that calls an external binary, built from C source.
The source for that external executable is part of my python module, distributed as a .tar.gz file.
Is there a way of unzipping, then compiling that external executable, and installing it using setuptools/setup.py?
What I'd like to achieve is:
installing that binary into virtual environments
manage compilation/installation of the binary using setup.py install, setup.py build etc.
making the binary part of my python module, so that it can be distributed as a wheel without external dependencies
I solved this in the end by modifying setup.py to add additional handlers for the commands that perform the installation.
An example of a setup.py which does this might be:
import os
import subprocess
import sys

from setuptools import setup
from setuptools.command.install import install


def get_virtualenv_path():
    """Used to work out the path to install compiled binaries to."""
    if hasattr(sys, 'real_prefix'):
        return sys.prefix

    if hasattr(sys, 'base_prefix') and sys.base_prefix != sys.prefix:
        return sys.prefix

    if 'conda' in sys.prefix:
        return sys.prefix

    return None


def compile_and_install_software():
    """Use the subprocess module to compile/install the C software."""
    src_path = './some_c_package/'

    # compile the software
    cmd = "./configure CFLAGS='-O3 -w -fPIC'"
    venv = get_virtualenv_path()
    if venv:
        cmd += ' --prefix=' + os.path.abspath(venv)
    subprocess.check_call(cmd, cwd=src_path, shell=True)

    # install the software (into the virtualenv bin dir if present)
    subprocess.check_call('make install', cwd=src_path, shell=True)


class CustomInstall(install):
    """Custom handler for the 'install' command."""
    def run(self):
        compile_and_install_software()
        super().run()


setup(name='foo',
      # ...other settings skipped...
      cmdclass={'install': CustomInstall})
Now when python setup.py install is called, the custom CustomInstall class is used; it compiles and installs the software before the normal install steps run.
You can also do the same for any other steps you're interested in (e.g. build/develop/bdist_egg etc.).
An alternative is to wrap the compile_and_install_software() function in a subclass of setuptools.Command, creating a fully fledged setuptools command for it.
This is more complicated, but lets you do things like invoke it as a sub-command of another command (to e.g. avoid executing it twice) and pass custom options to it on the command line. A sketch of that alternative follows.
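A minimal sketch of that alternative, reusing the compile_and_install_software() function from the snippet above (the command name build_external and the class names are made up for this example):
from setuptools import Command, setup
from setuptools.command.install import install


class BuildExternal(Command):
    """Hypothetical 'build_external' command wrapping compile_and_install_software()."""
    description = 'compile and install the bundled C software'
    user_options = []  # add ('opt=', None, 'help text') tuples here to accept options

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        compile_and_install_software()


class InstallWithBuildExternal(install):
    """Install command that delegates the compile step to 'build_external'."""
    def run(self):
        # run_command executes each named command at most once per invocation,
        # so the compile step is not repeated if another command also requests it
        self.run_command('build_external')
        super().run()


setup(name='foo',
      # ...other settings skipped...
      cmdclass={'build_external': BuildExternal,
                'install': InstallWithBuildExternal})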
