There is following structure in my python project:
├───pyproject.toml
└───mypackage
    ├───lib
    │       localdep-0.2.0-py3-none-any.whl
    │       localdep-0.2.0.tar.gz
    └───service
            app.py
            home.py
            modules.py
I need to build mypackage with poetry, using the local dependency localdep from mypackage/lib/localdep-0.2.0..., so that mypackage can be installed with a simple pip install mypackage-0.1.0.tar.gz command and no additional files. I've tried using the path and file specifiers in pyproject.toml, but I keep getting the following error:
ERROR: Could not find a version that satisfies the requirement localdep (from mypackage==0.1.0) (from versions: none)
Current version of my pyproject.toml:
[build-system]
requires = [ "poetry>=0.12",]
build-backend = "poetry.masonry.api"
[tool.poetry]
name = "myproject"
version = "0.1.0"
description = "Simple demo project."
authors = ["Some Author"]
license = "MPL 2.0"
[tool.poetry.dependencies]
python = "3.7.3"
localdep = {file = "mypackage/lib/localdep-0.2.0-py3-none-any.whl"}
Does anyone know how to declare a local dependency in pyproject.toml so that poetry build packages it correctly?
The use case of poetry's local dependency syntax is different from what you need, so it can't solve your problem here.
From the usage examples in those docs you can see that the path points to packages that are not part of the project itself; the paths always leave the project root first, like this: ../some/different/location. The whole construct is only useful during development, and its logic is run by poetry install, not poetry build.
What you want is to bundle the local dependency together with your project, so that during a deployment of your project's .whl pip knows where to pull the local dependency from. But since pyproject.toml does not get packaged together with the wheel's metadata, the information about where to get dependencies from is no longer available, so this can't work. A built package only knows which dependencies it has, not where to get them from. This philosophy might be a bit unusual if you come from languages where bundling all dependencies together with your code is common.
So even if you manage to build your package so that it includes another wheel (which doesn't work out of the box, because setuptools only includes .py files in the sdist/bdist by default), there is no way for pip to know that the dependency is reachable there.
I see four options to solve your problem in a way that Python supports.
1. Use a version of your localdep that is on PyPI, or upload it there. But if that were a possibility, you would probably not have asked this question.
2. If localdep is under your control and is only used by mypackage, write it as a simple submodule of mypackage instead - read: the initial decision to make localdep its own package was overengineering, which may or may not be true.
3. Use a way to vendor localdep that Python understands. Take this guide for an in-depth explanation with all the considerations and pitfalls, or this hacky post if you want to get it to work without needing to really understand why or how (see the sketch after this list).
4. Deploy your package as a "wheelhouse".
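For option 3, a rough sketch of what vendoring can look like in practice; the _vendor directory name and the import path are just illustrative conventions here, not something poetry or pip prescribes:
# mypackage/
#     _vendor/
#         __init__.py
#         localdep/          <- localdep's source tree copied here
#     service/
#         app.py
#
# mypackage/service/app.py
from mypackage._vendor import localdep  # resolved from your own package; nothing extra to pip install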
What is a wheelhouse?
The wheelhouse part needs a little more text, but it might be closest to what you initially had in mind. In this approach you don't need to include localdep in your project at all, and the whole mypackage/lib folder is best deleted. localdep only needs to be installed into your Python interpreter, which you can verify by running pip freeze and checking the output for localdep's name and version.
That same output is then used to build said wheelhouse with pip wheel -w wheelhouse $(pip freeze). If you don't want to include dev dependencies, delete your current virtualenv, re-create and enter it with poetry install; poetry shell, and only then run the wheel command.
For completeness' sake, you can build dist/myproject.whl with poetry build and throw it in there as well; then you can use this wheelhouse to install your package wherever you like by running python -m pip install wheelhouse/* with whichever Python you want.
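Put together, the whole workflow might look roughly like this (a sketch assuming a bash-like shell and a freshly created virtualenv):
# re-create and enter a clean virtualenv so pip freeze lists only runtime dependencies
poetry install
poetry shell
# build wheels for everything that is currently installed
pip wheel -w wheelhouse $(pip freeze)
# add your own project's wheel as well
poetry build
cp dist/*.whl wheelhouse/
# on the target machine, install everything from the wheelhouse
python -m pip install wheelhouse/*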
Why use pip and not poetry for that?
Poetry is a dependency manager for developing packages; as such, it doesn't deal with deployment issues much and only targets them tangentially. This is usually fine, because pip, as we saw, is quite capable in that regard. pip in turn is not that convenient during development, which is why poetry is nice, but poetry does not completely replace pip.
Related
I would like to know if there's a way to package a simple Python project and have it perform installation over the internet, just like when you install a module with pip.
Sure there is. This is how all the third-party packages we all use were published.
The official PyPA documentation explains how to do it here.
Basically, you need to package your project into a wheel file and upload it to the PyPI repository. To do this you declare (mainly in setup.py) your package name, version, which sub-packages to pack into the wheel, and so on.
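As a minimal sketch (all names and versions here are placeholders, not from the question), a setup.py can be as small as this:
from setuptools import setup, find_packages

setup(
    name="myproject",          # placeholder project name
    version="0.1.0",           # placeholder version
    packages=find_packages(),  # include all sub-packages found next to setup.py
)
Then python setup.py sdist bdist_wheel builds the archives into dist/, and a tool such as twine (twine upload dist/*) is commonly used to push them to PyPI; the PyPA guide linked above covers the details.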
If your packages are only required by a particular project, it is straightforward to keep them in that project's Git repository. You can put them in a directory named wheelhouse, which takes its name from the former default output directory created by pip wheel.
If you put the private package foo in the wheelhouse, you can install as follows:
pip install foo -f wheelhouse
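A sketch of the full round trip, using the answer's example package name foo and assuming the commands are run from the repository root:
# build wheels for foo (and its dependencies) into the wheelhouse directory
pip wheel -w wheelhouse foo
# later, install foo using only the wheels committed to the repository
pip install foo --no-index -f wheelhouse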
How do I actually create a release/distro of a python package that uses a git repo tag for the versioning, using setuptools and pbr?
There is plenty of information on the basic setup and configuration required:
SetupTools Documentation - setup() and setup.py configuration
Python Packaging User Guide - Installing Packages
PBR v3.1.1 documentation
StackOverflow: How to use version info generated using setuptools and pbr
But where is the simple info on how to actually create the distro?
i.e. I'm looking for whatever command finds the git tag with the version info and pulls it into the configuration info, so the source can be distributed with that new version info, and the version info is discoverable from the scripts, using a method like the one described in this answer.
Additional details
I'm working on a project that will be distributed to other developers only through a git repo, not through PyPi. The project will be released to users as an executable using pyinstaller, so this package distribution will only serve a few key purposes:
Install/set up the package for other developers so that dependencies/environment can be recreated cleanly.
Manage versioning - Current plan is to use pbr to generate versions from the Git repo tags, so those tags can be our source of truth for versioning
Use pbr for other auto-generation of mundane items from Git, such as authors, the MANIFEST.in file, release notes, etc.
Since setuptools docs focus on setting up a fully distributable and reusable package with PyPi and pip, and pbr docs only really tell you how to modify setuptools configuration to use pbr, I can't find the info on how to just run the distribution/release process.
I'm sure it exists somewhere in the documentation, but after several false starts I'm asking here. It is implied everywhere I look that everyone either knows how to do this or it just magically happens as a part of the process.
Am I just missing the obvious?
Update:
Based on sinoroc's answer, it appears I need to look into development mode installs, i.e. anyone developing the project will clone from git and then install using setuptools' development install mode.
This wasn't directly part of the original question, but it was implied, and I believe it will be of interest to people in the same situation (info I couldn't easily find).
More info is available in his answer on updating some of the metadata, and via this setuptools documentation link to working in "Development Mode"
In short:
python3 setup.py sdist
python3 setup.py bdist_wheel
How do I actually create a release/distro of a python package that uses a git repo tag for the versioning, using setuptools and pbr?
The usual commands to create (source and wheel) distributions of your Python package with setuptools are: python3 setup.py sdist and python3 setup.py bdist_wheel. The distributions can then be found in the dist directory by default.
Since setuptools docs focus on setting up a fully distributable and reusable package with PyPi and pip, and pbr docs only really tell you how to modify setuptools configuration to use pbr, I can't find the info on how to just run the distribution/release process.
It is true that setuptools does not document this. It only documents the differences to distutils, and it is confusing indeed. See below for actual documentation...
But where is the simple info on how to actually create the distro?
https://packaging.python.org/tutorials/packaging-projects/#generating-distribution-archives
https://docs.python.org/3/distutils/sourcedist.html
https://docs.python.org/3/distutils/builtdist.html
Update
Since you don't plan on publishing distributions of your project on an index such as PyPI, and you plan on using pyinstaller instead, then you can indeed most likely disregard the setuptools commands such as sdist and bdist_wheel.
Still you might want to know these commands for the development phase:
Use commands such as python3 setup.py --version, python3 setup.py --fullname to figure out if setuptools (and in your case pbr) is catching the right info.
Use python3 setup.py develop (or pip install --editable .) to place a pseudo link (egg-link) in your site-packages that points at your work in progress. This way your changes are always installed and importable. Important: don't use python3 setup.py install; this would copy the current version to site-packages, and newer changes would not be importable.
Now I don't know how all this will work once you move on to pyinstaller. Especially since you mentioned that you want the meta info (such as the version number) to be discoverable from within your scripts. The technique with setuptools pkg_resources may or may not work in the pyinstaller context.
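For reference, the pkg_resources technique mentioned here looks roughly like this (assuming the distribution is named myproject; adjust to your own project name):
import pkg_resources

# look up the installed distribution's metadata; with pbr this reflects the latest git tag
version = pkg_resources.get_distribution("myproject").version
print(version)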
This is how I solved the same issue, also having read several different links.
I have created a setup.py file with this content:
from setuptools import setup, find_packages

def readme():
    with open('README.rst') as f:
        return f.read()

def read_other_requirements(other_type):
    with open(other_type + '-requirements.txt') as f:
        return f.read()

setup(
    setup_requires=read_other_requirements('setup'),
    pbr=True,
    packages=find_packages('src'),
    package_dir={'': 'src'},
    include_package_data=True,
    zip_safe=True
)
I have the source code in ./src. Also, I have a setup-requirements.txt, with content:
pip==18.1
pbr==5.1.1
setuptools==40.7.0
And a setup.cfg with this content:
[metadata]
name = XXXXX
description = XXXXX
description-file = README.rst
home-page = https://github.com/XXXXX/XXXXX
So first, you install the setup-requirements:
pip install -r setup-requirements.txt
Then, whenever you have a local commit that was tagged in GitHub, you can install it using:
python setup.py install
and it will be installed with the tagged version.
You can check it by doing:
python setup.py --version
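To see the git-tag-driven versioning in action, a rough sequence might look like this (the tag value is only an example):
# tag the current commit; pbr derives the package version from this tag
git tag -a 1.2.3 -m "release 1.2.3"
# confirm that setuptools/pbr picked the tag up
python setup.py --version
# build the source and wheel distributions into dist/
python setup.py sdist bdist_wheel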
I'm trying to write a python package that can be installed from PyPI and am having trouble getting my head around how exactly to structure setup.py and requirements.txt properly.
I know that they have different semantics and different purposes, with setup.py defining what's needed and requirements.txt giving exact versions. I also know that you shouldn't read requirements.txt into setup.py.
So what I need to know is how to structure setup.py and requirements.txt so that when my package is installed from PyPI, the right requirements are installed.
In my example, I need django-haystack (the latest version is 2.5.1), but my code is only compatible with django-haystack version 2.5.0, so my setup.py and requirements.txt are as shown below:
setup.py:
setup(
    name='my_package',
    install_requires=[
        'django-haystack',
    ],
)
requirements.txt:
django-haystack==2.5.0
How can I structure my setup code so that when this is installed, django-haystack==2.5.0 is installed not the latest?
First, a warning: specifying an explicit version requirement in a setup.py file without a range will guarantee frustration for end-users in the future.
You can simply do it like so in the setup.py file.
setup(
    name='my_package',
    install_requires=[
        'django-haystack==2.5.0',
    ],
)
However, if another user wishes to use another package that requires the latest version of django-haystack, they won't be able to install your package as defined, due to version conflicts. Of course, if the package at hand is so flaky that it can't even attempt to use semantic versioning, then there isn't really much that can be done.
Now, if all you are after is a reproducible build, the requirements.txt method can be used to pin explicit versions for all packages within your environment. This is out of band from the typical package dependency structure, but it won't suffer from the potentially crippling lockdown caused by requirements that look conflicting without actually being in conflict. zc.buildout is an alternative, but it is much heavier and does a lot more than just Python.
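As an illustration of that split (the upper bound below is an assumption about how far compatibility extends, not something stated in the question), setup.py can declare a range while requirements.txt keeps the exact pin for reproducible installs:
from setuptools import setup

# setup.py: a compatible range instead of one exact pin
setup(
    name='my_package',
    install_requires=[
        'django-haystack>=2.5.0,<2.6',  # any 2.5.x release is assumed acceptable
    ],
)
requirements.txt then stays as the single exact pin, django-haystack==2.5.0, for reproducing the known-good environment.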
I'd like to distribute a whole virtualenv, or a bunch of Python wheels of exact versions with their runtime dependencies, for example:
pycurl
pycurl.so
libcurl.so
libz.so
libssl.so
libcrypto.so
libgssapi_krb5.so
libkrb5.so
libresolv.so
I suppose I could rely on the system to have libssl.so installed, but surely not libcurl.so of the correct version and probably not Kerberos.
What is the easiest way to package one library into a wheel together with all of its run-time dependencies?
Or is that a fool's errand and I should package entire virtualenv?
How to do that reliably?
P.S. compiling on the fly is not an option, some modules are patched.
AFAIK, there is no good standard way to portably install dependencies with your package. Continuum has made conda for precisely this purpose. The numpy guys wrote their own distutils submodule in their package to install some complicated dependencies, and now at least some of them advocate conda as a solution. Unfortunately, you may have to make conda packages for some of these dependencies yourself.
If you're fine without portability, then targeting the package manager of the target machines will obviously work. Otherwise, for a portable package manager, conda is the only option I know of.
Alternatively, from your post ("compiling on the fly is not an option") it sounds like portability may not be an issue for you, in which case you could also install all the requirements to a prefix directory (most installers I've come across support a configure --prefix=/some/dir/ option). If you have a guaranteed single architecture, you could probably prefix-install all your dependencies to a single directory and pass that around like a file. The conda approach would probably be cleaner, but I've used prefix installs quite a bit and they tend to be one of the easiest solutions to get going.
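A rough sketch of such a prefix install for one autotools-based dependency (the prefix path is an arbitrary example, and the exact configure flags depend on the library):
# build and install the native library into an isolated prefix
./configure --prefix=/opt/mydeps
make
make install

# make the prefix visible when building and running the Python extensions
export PKG_CONFIG_PATH=/opt/mydeps/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=/opt/mydeps/lib:$LD_LIBRARY_PATH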
Edit:
As for conda, it is simultaneously a package manager and a "virtualenv"-like environment/Python install. While virtualenv is added on top of an existing Python install, conda takes over the whole install, so you can be more sure that all the dependencies are accounted for. Compared to pip, it is designed for adding generalized non-Python dependencies, instead of just compiling C/C++ extensions. For more info, see:
pip vs conda (also recommends buildout as a possibility)
conda as a python install
As for how to use conda for your purpose, the docs explain how to create a recipe:
Conda build framework
Building a package requires a recipe. A recipe is a flat directory which
contains the following files:
meta.yaml (metadata file)
build.sh (Unix build script which is executed using bash)
bld.bat (Windows build script which is executed using cmd)
run_test.py (optional Python test file)
patches to the source (optional, see below)
other resources, which are not included in the source and cannot be
generated by the build scripts.
The same recipe should be used to build a package on all platforms.
When building a package, the following steps are invoked:
read the metadata
download the source (into a cache)
extract the source in a source directory
apply the patches
create a build environment (build dependencies are installed here)
run the actual build script. The current working directory is the source
directory with environment variables set. The build script installs into
the build environment
do some necessary post processing steps: shebang, rpath, etc.
add conda metadata to the build environment
package up the new files in the build environment into a conda package
test the new conda package:
create a test environment with the package (and its dependencies)
run the test scripts
There are example recipes for many conda packages in the conda-recipes repo (https://github.com/continuumio/conda-recipes).
The conda skeleton command can help to make skeleton recipes for common repositories, such as PyPI (https://pypi.python.org/pypi).
Then, as a client, you would install the package similarly to how you would install from pip.
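Concretely, a hedged sketch of that build-and-install cycle (the recipe directory and package name are placeholders):
# build the package from the recipe directory described above
conda build ./recipe
# install the locally built package into the current conda environment
conda install --use-local mypackage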
Lastly, Docker may also be interesting to you, though I haven't seen it used much for Python.
You may want to look into PEX: https://pex.readthedocs.io/en/stable/whatispex.html
'Files with the .pex extension – “PEX files” or ”.pex files” – are self-contained executable Python virtual environments. PEX files make it easy to deploy Python applications: the deployment process becomes simply scp.'
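A small sketch of how PEX is typically used (myapp and its entry point are placeholders for your own package):
# install the pex build tool
pip install pex
# bundle the local project and its dependencies into one executable file
pex . -o myapp.pex -e myapp.cli:main
# ship the single file and run it on the target machine
./myapp.pex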
I've created a Python module on Github that uses Nose for unit testing and Sphinx for generating documentation. I have two questions:
Should I include Sphinx and/or Nose in my module's dependencies in setup.py (install_requires), as they are not required for basic module functionality, only if you want to build the documentation/run tests yourself?
Should I include Sphinx and/or Nose in my module's requirements.txt on Github, for the same reasons but users that download my project from Github might be more likely to build docs/run tests?
This is my first Python module, so a bit of best practices/standards advice would be appreciated.
If nose and/or sphinx are not required for the basic functionality of your package then don't include them in setup.py. There's no point in forcing users to install packages that they might not ever use. If they eventually want to help you develop your package they can install the requisite packages themselves.
requirements.txt files should also not include development-required packages, though there's some wiggle room there.
For example, over at pandas we use requirements files for our Travis-CI builds. You can check them out here.
One thing we are considering is building our documentation on Travis-CI, as sometimes a failed doc build catches bugs that the test suite doesn't. In that case we would put sphinx in the requirements file of the Python version we use to build the documentation.
Don't include those nice-to-haves in your setup.py. You can write a requirements file for developers if you like; users won't need one. For example, call one file reqs.development:
-e . # include the package defined by setup.py in editable (development) mode
nose
sphinx
Users can pip install yourmodule or pip install https://your/tarball; developers can fork, clone, and pip install -r reqs.development.