I'm working on a Python program that a few other people are going to be using. They all have Python installed; however, the program imports from several third-party libraries.
Ideally, I would like to be able to simply send them the Python files and have them just run the program, instead of telling them which libraries they need and asking them to get each one manually using pip.
Is there any way I can include all the libraries that my project uses with my Python files (or perhaps set up an installer that installs them)? Or do I just have to give them a full list of Python libraries they must install and get them to do each one manually?
That's the topic of the Python packaging tutorial.
In brief, you list your dependencies in the install_requires parameter of setuptools.setup().
You can then generate a package in many formats including wheel, egg, and even a Windows installer package.
Using the standard packaging infrastructure will also give you the benefit of easy version management.
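For instance, a minimal setup.py along these lines (a sketch only; the name and dependency list are placeholders) declares the dependencies, and python setup.py bdist_wheel then produces a wheel in dist/:

from setuptools import setup

setup(
    name='myproject',            # placeholder project name
    version='0.1',
    packages=['myproject'],
    install_requires=[           # the third-party libraries the program imports
        'requests',
        'markdown',
    ],
)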
You can build a package and specify its dependencies in your setup.py. This is the right way:
http://python-packaging.readthedocs.io/en/latest/dependencies.html
from setuptools import setup

setup(name='funniest',
      version='0.1',
      description='The funniest joke in the world',
      url='http://github.com/storborg/funniest',
      author='Flying Circus',
      author_email='flyingcircus@example.com',
      license='MIT',
      packages=['funniest'],
      install_requires=[
          'markdown',
      ],
      zip_safe=False)
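With a setup.py like this in place, someone who receives the source can install the package and its dependency in one step (assuming they can reach PyPI):
pip install .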
The "need it now" hack option is a library that lets your script interact with the shell; pexpect is my favorite for automating shell interactions.
https://pexpect.readthedocs.io/en/stable/
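For example, a throwaway bootstrap script along these lines (a sketch only; the dependency list is hypothetical) could drive pip on each user's machine:

import pexpect

# Run 'pip install' for each dependency and wait for it to finish.
for package in ['markdown', 'requests']:
    child = pexpect.spawn('pip install {}'.format(package))
    child.expect(pexpect.EOF, timeout=300)   # block until pip exits
    print(child.before.decode())             # echo pip's output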
You should use virtualenv to manage the packages of your project. If you do, you can then run pip freeze > requirements.txt to save all the dependencies of the project. After this, requirements.txt will contain all the requirements necessary to run your app. Add this file to your repository. All packages from requirements.txt can be installed with
pip install -r requirements.txt
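For reference, the file pip freeze writes simply pins one package per line; its contents might look like this (names and versions are illustrative):
markdown==3.0.1
requests==2.21.0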
The other option would be to create a PyPI package; the Python Packaging User Guide has a tutorial on this topic.
Related
How do I actually create a release/distro of a python package that uses a git repo tag for the versioning, using setuptools and pbr?
There is plenty of information on the basic setup and configuration required:
SetupTools Documentation - setup() and setup.py configuration
Python Packaging User Guide - Installing Packages
PBR v3.1.1 documentation
StackOverflow: How to use version info generated using setuptools and pbr
But where is the simple info on how to actually create the distro?
i.e. I'm looking for whatever command finds the git tag with the version info and pulls it into the configuration info, so the source with that new version info can be distributed, and so the version info is discoverable from the scripts, using a method like the one described in this answer.
Additional details
I'm working on a project that will be distributed to other developers only through a git repo, not through PyPI. The project will be released to users as an executable using pyinstaller, so this package distribution will only serve a few key purposes:
Install/Setup the package for other developers so that dependencies/environment can be recreated cleanly.
Manage versioning - Current plan is to use pbr to generate versions from the Git repo tags, so those tags can be our source of truth for versioning
Use pbr for other auto generation of mundane items from Git, such as authors, manifest.in file, release notes, etc.
Since the setuptools docs focus on setting up a fully distributable and reusable package with PyPI and pip, and the pbr docs only really tell you how to modify the setuptools configuration to use pbr, I can't find the info on how to just run the distribution/release process.
I'm sure it exists somewhere in the documentation, but after several false starts I'm asking here. It is implied everywhere I look that everyone either knows how to do this or it just magically happens as a part of the process.
Am I just missing the obvious?
Update:
Based on sinoroc's answer, it appears I need to look into development mode installs. That is, anyone developing the project will clone it from git and then install it using setuptools' development install mode.
This wasn't directly part of the original question, but it was implied, and I believe it will be of interest to people in the same situation (it's info I couldn't easily find).
More info is available in his answer on updating some of the metadata, and via this setuptools documentation link to working in "Development Mode"
In short:
python3 setup.py sdist
python3 setup.py bdist_wheel
How do I actually create a release/distro of a python package that uses a git repo tag for the versioning, using setuptools and pbr?
The usual commands to create (source and wheel) distributions of your Python package with setuptools are: python3 setup.py sdist and python3 setup.py bdist_wheel. The distributions can then be found in the dist directory by default.
Since the setuptools docs focus on setting up a fully distributable and reusable package with PyPI and pip, and the pbr docs only really tell you how to modify the setuptools configuration to use pbr, I can't find the info on how to just run the distribution/release process.
It is true that setuptools does not document this. It only documents the differences from distutils, and it is confusing indeed. See below for the actual documentation...
But where is the simple info on how to actually create the distro?
https://packaging.python.org/tutorials/packaging-projects/#generating-distribution-archives
https://docs.python.org/3/distutils/sourcedist.html
https://docs.python.org/3/distutils/builtdist.html
Update
Since you don't plan on publishing distributions of your project on an index such as PyPI, and you plan on using pyinstaller instead, then you can indeed most likely disregard the setuptools commands such as sdist and bdist_wheel.
Still you might want to know these commands for the development phase:
Use commands such as python3 setup.py --version and python3 setup.py --fullname to check whether setuptools (and, in your case, pbr) is picking up the right info.
Use python3 setup.py develop (or pip install --editable .) to place a pseudo link (egg-link) in your site-packages that points at your work in progress. This way your changes are always installed and importable. Important: don't use python3 setup.py install; that would copy the current version to site-packages, and newer changes would not be importable.
Now I don't know how all this will work once you move on to pyinstaller, especially since you mentioned that you want the meta info (such as the version number) to be discoverable from within your scripts. The technique with setuptools' pkg_resources may or may not work in the pyinstaller context.
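For reference, the pkg_resources technique mentioned above looks roughly like this (a sketch; 'my_project' stands in for your actual distribution name):

import pkg_resources

# Read the version from the installed distribution's metadata. Whether this
# metadata survives pyinstaller bundling is exactly the open question above.
version = pkg_resources.get_distribution('my_project').version
print(version)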
This is how I solved the same issue, having also read several different links.
I have created a setup.py file with this content:
from setuptools import setup, find_packages

def readme():
    with open('README.rst') as f:
        return f.read()

def read_other_requirements(other_type):
    with open(other_type + '-requirements.txt') as f:
        return f.read()

setup(
    setup_requires=read_other_requirements('setup'),
    pbr=True,                       # let pbr derive the version from git tags
    packages=find_packages('src'),
    package_dir={'': 'src'},        # source code lives under ./src
    include_package_data=True,
    zip_safe=True
)
I have the source code in ./src. Also, I have a setup-requirements.txt, with content:
pip==18.1
pbr==5.1.1
setuptools==40.7.0
And a setup.cfg with this content:
[metadata]
name = XXXXX
description = XXXXX
description-file = README.rst
home-page = https://github.com/XXXXX/XXXXX
So first, you install the setup-requirements:
pip install -r setup-requirements.txt
Then, whenever you have a local commit that was tagged in GitHub, you can install it using:
python setup.py install
and it will be installed with the tagged version.
You can check it by doing:
python setup.py --version
I'm trying to maintain dependencies using pip install -r requirements.txt. However, some of the required packages do not support Python 3 directly, but they can be converted manually using 2to3.
Is there a way to force pip to run 2to3 on those packages automagically when doing pip install -r requirements.txt?
No, it needs to be part of the package setup configuration instead. See Supporting both Python 2 and 3 with Distribute.
You add metadata to your package installer:
setup(
    name='your.module',
    version='1.0',
    description='This is your awesome module',
    author='You',
    author_email='your@email',
    package_dir={'': 'src'},
    packages=['your', 'your.module'],
    test_suite='your.module.tests',
    use_2to3=True,
    convert_2to3_doctests=['src/your/module/README.txt'],
    use_2to3_fixers=['your.fixers'],
    use_2to3_exclude_fixers=['lib2to3.fixes.fix_import'],
)
Such a package would then automatically run 2to3 on installation into a Python 3 system.
2to3 is a tool, not a magic bullet; you cannot apply it to an arbitrary package pip downloads from PyPI. The package needs to support it in the way it is coded. Thus, running it automatically from pip is not going to work; the responsibility lies with the package maintainer.
Note that just because 2to3 runs successfully on a package, it does not necessarily follow that the package will work in Python 3. Assumptions about bytes vs. unicode usually crop up only when you actually start using the package.
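A tiny illustration of the kind of issue that survives a clean 2to3 run (the data here is made up, but the behavior is real):

data = b'hello'                          # e.g. bytes read from a socket or a binary file
print(data == 'hello')                   # False in Python 3; the equivalent was True in Python 2
print(data.decode('utf-8') == 'hello')   # True: the bytes must be decoded explicitly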
Contact the maintainers of the packages you are interested in and ask what the status is for that package for Python 3. Supplying patches to them usually helps. If such requests and offers for help fall on deaf ears, for Open Source packages you can always fork them and apply the necessary changes yourself.
I'm working on creating a Python package that is somewhat modular in nature and I was wondering what's the best way to go about handling multiple installation configurations?
Take the following setup.py for the package of a simple document parser:
setup(
    name='my_document_parser',
    ...
    entry_points={
        'my_document_parser.parsers': [
            'markdown = my_document_parser.parsers.misaka:Parser [misaka]',
            'textile = my_document_parser.parsers.textile:Parser [textile]'
        ]
    },
    install_requires=[
        'some_required_package'
    ],
    extras_require={
        'misaka': ['misaka'],
        'textile': ['textile']
    },
    ...
)
I'd like to make something like the following available:
A default install
python setup.py install or pip install my_document_parser
installs my_document_parser, some_required_package, misaka
A bare install
something like python setup.py install --bare
installs my_document_parser, some_required_package
A way to enable other dependencies
something like python setup.py install --bare --with-textile
installs my_document_parser, some_required_package, textile
This is my first time messing around with Python packages from a developer standpoint, so I'm open to anything anyone has to offer if I'm going about things the wrong way or such.
Thanks.
The default installation will install everything in your install_requires and packages sections. Packages listed under extras_require will never be installed by a plain setup.py run; they are only pulled in if you create some logic for that kind of installation.
By default, setuptools and distutils have no way to explicitly say "install using these sections of my setup call". You would need to implement that yourself.
pip always invokes setup.py with no parameters other than --single-version-externally-managed, which tells setuptools never to create a .egg file.
Conclusion: if you want people to install your package by using pip, forget --bare and --with-textile. The best you can do is a full installation including misaka and textile as dependencies.
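Note, though, that users who install with pip can still opt in to the optional dependencies declared in extras_require using the bracket syntax:
pip install my_document_parser[textile]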
Maybe you should read The Hitchhiker's Guide To Packaging, specifically the Creating a Package section and setuptools' specification -- it is a pretty good one; it covers everything setuptools can do.
I've got a couple of packages on PyPI, and I'd like to include autocompletion features with both of them. How would you check that Bash autocompletion should be installed at all (check for /etc/bash_completion, maybe?), and how would you install it with setup.py (preferably using setuptools)?
If you're going to require OS-level packages (i.e. bash-completion), then you should distribute your library as an OS-level package. That is, in .deb, .rpm, etc. Some tips here:
Debian New Maintainer's Guide
Fedora Packaging Guidelines
As part of the package generation, you can call your setuptools script to install the Python code. To ensure bash-completion is installed, you can specify it is a required package.
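For example, in a Debian package the dependency would be declared in debian/control (a sketch; the package name is a placeholder):

Package: python-yourtool
Depends: ${misc:Depends}, bash-completion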
You can use the data_files option:

from setuptools import setup

setup(
    ...
    data_files=[
        ('/etc/bash_completion.d/', ['extra/some_completion_script']),
    ]
)
This is somewhat related to this question. Let's say I have a package that I want to deploy via rpm because I need to do some file copying post-install and I have some non-Python dependencies I want to declare. But let's also say I have some Python dependencies that are easily available on PyPI. It seems like if I just package it as an egg, an unzip followed by python setup.py install will automatically take care of my Python dependencies, at the expense of losing any post-install functionality and non-Python dependencies.
Is there any recommended way of doing this? I suppose I could specify this in a pre-install script, but then I'm getting into information duplication and not really using setuptools for much of anything.
(My current setup involves passing install_requires = ['dependency_name'] to setup, which works for python setup.py bdist_egg and unzip my_package.egg; python my_package/setup.py install, but not for python setup.py bdist_rpm --post-install post-install.sh and rpm --install my_package.rpm.)
I think it would be best if your Python dependencies were available as RPMs also, and declared as dependencies in the RPM. If they aren't available elsewhere, create them yourself and put them in your yum repository.
Running PyPI installations as a side effect of RPM installation is evil, as it won't support proper uninstallation (i.e. uninstalling your RPM will remove your package, but leave the dependencies behind, with no proper removal procedure).
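For example, the .spec file of your RPM would declare the Python dependencies alongside the non-Python ones (a sketch; the names are placeholders):

Requires: python-markdown
Requires: some-non-python-dependency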