What does # mean in `pip install <package> # path`? - python

I just came across this from a project on GitHub
pip install colorama # file:///home/conda/feedstock_root/build_artifacts/colorama_1602866480661/work
What does # do?
Assuming it decides the path where the package gets installed, I tried it with other paths and it didn't work that way.
Also, why would we want to do this?
Also, what is the significance of file:///?
Here is the link to the project
https://github.com/sstzal/DFRF/blob/main/requirements.txt
Thanks for your attention

This notation with a # is specified in the "Direct references" section of PEP 440.
The part after the # is a direct reference to a location the project can be installed from (i.e. the source, not the destination).
It can be a URL to a file archive (sdist or wheel), or to a VCS repository (git, for example).
The file:// notation is the (local) filesystem protocol (as opposed to, for example, the https:// protocol).
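For illustration, here is what a couple of direct references in a requirements.txt might look like. The paths and versions are made up; note that in the PEP 440 spelling (and in modern pip freeze output) the name and the URL are joined with @:

```
# install colorama from a local wheel on the filesystem
colorama @ file:///home/user/wheels/colorama-0.4.4-py2.py3-none-any.whl
# install a package straight from a git repository
requests @ git+https://github.com/psf/requests.git
```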

Related

ReadTheDocs + Sphinx + setuptools_scm: how to?

I have a project where I manage the version through git tags.
Then, I use setuptools_scm to get this information in my setup.py; it also generates a file (_version.py) that gets included when building the wheel for pip.
This file is not tracked by git since:
it holds the same information that can be gathered from git
it would create a circular situation where building the wheel modifies the version, which changes the sources, so a new version would be generated
Now, when I build the documentation, it becomes natural to fetch this version from _version.py and this all works well locally.
However, when I try to do this within ReadTheDocs, the building of the documentation fails because _version.py is not tracked by git, so ReadTheDocs does not find it when fetching the sources from the repository.
EDIT: I have tried to use the method proposed in the duplicate, which is the same as what setuptools_scm indicate in the documentation, i.e. using in docs/conf.py:
from pkg_resources import get_distribution
__version__ = get_distribution('numeral').version
... # I use __version__ to define Sphinx variables
but I get:
pkg_resources.DistributionNotFound: The 'numeral' distribution was not found and is required by the application
(Again, building the documentation locally works correctly.)
How could I solve this issue without resorting to maintaining the version number in two places?
Eventually, the issue was that ReadTheDocs does not build/install your package by default, while I was expecting it to.
All I had to do was to enable "Install Project" in the Advanced Settings / Default Settings.
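Independently of that setting, the lookup in docs/conf.py can be made to degrade gracefully when the distribution is not installed. A minimal sketch, reusing the 'numeral' distribution name from the question:

```python
# docs/conf.py (sketch): fall back when the distribution is not installed,
# e.g. on ReadTheDocs before "Install Project" is enabled in the settings.
from pkg_resources import get_distribution, DistributionNotFound

try:
    release = get_distribution("numeral").version
except DistributionNotFound:
    # The package itself was never pip-installed in this environment.
    release = "unknown"

version = ".".join(release.split(".")[:2])  # short X.Y version for Sphinx
```

This way the build does not crash outright; it just produces documentation labeled "unknown", which makes the misconfiguration easy to spot.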

How to list private Python packages as a Conda requirement?

I need to create and ship conda envs that list packages that must remain private. It would be especially handy to list dependencies using a URL to a (company-internal) GitLab instance.
Is there a way to register dependencies with conda using a repo URL? Is there also some other way to include Python packages you have a source distribution for, but cannot be hosted on a regular channel?
Thanks.
If you know beforehand what needs to remain private, you can ship direct-reference eggs, use zoned index-urls and extra-index-urls, or use the conda meta source info, like here:
# requirements.txt
gevent
publicthing==1.2
someother==0.1
# private packages
file://package/egg/here
-e git+ssh://priv.gitlab.some.org/some/privpack.git#egg=privpack
--extra-index-url https://build.priv.gitlab.some.org/some/pypi/simple
I'd guess "private" here means sdist/dist build artifacts (tars, eggs, wheels) at a URI/URL only accessible on a local network.
Where the package is hosted should be indicator enough for labeling something as "private": the build artifacts either are or are not available through some availability mechanism (network location, building locally, shipped binaries, etc.).
Using pypi/pip:
https://pip.readthedocs.io/en/1.1/requirements.html#requirements-file-format
Conda meta build info:
source:
  - url: https://build.priv.gitlab.some.org/some/pypi/simple/privpack/a.tar.bz2
    folder: stuff
  - url: https://build.priv.gitlab.some.org/some/pypi/simple/privpack/b.tar.bz2
    folder: stuff
https://conda.io/docs/user-guide/tasks/build-packages/define-metadata.html
examples:
https://github.com/conda/conda-recipes
https://github.com/conda/conda-recipes/blob/c2eb600f8545cd21aa9e50a8bb8a81df7fd3c915/r-packages/r-yaml/meta.yaml#L10
https://github.com/conda/conda-recipes/blob/a796713805ac8eceed191c0cb475b51f4d00718c/python/pyserial/meta.yaml#L5
https://conda.io/docs/user-guide/tasks/build-packages/define-metadata.html#source-from-git
https://conda.io/docs/user-guide/tasks/build-packages/define-metadata.html#source-from-a-local-path
related :
https://docs.anaconda.com/anaconda-repository/admin-guide/install/config/config-client#kerberos-configuration
https://docs.anaconda.com/anaconda-repository/admin-guide/install/config/kerberos-example
https://docs.anaconda.com/anaconda-repository/admin-guide/install/config/config-client#pip-configuration
https://pip.readthedocs.io/en/1.1/requirements.html#git
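The --extra-index-url from the requirements example above can also be set once in pip's configuration file so that every install sees the private index; a sketch, reusing the hypothetical hostname from above:

```
; ~/.config/pip/pip.conf on Linux/macOS (pip.ini under %APPDATA%\pip on Windows)
[global]
extra-index-url = https://build.priv.gitlab.some.org/some/pypi/simple
```

That keeps the private URL out of requirements.txt itself, which can be handy if the same requirements file is shared outside the network.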

Deploy Ablog to github pages

I've just started using the ABlog plugin for sphinx to create a static-site blog.
Is it easy to change ablog deploy to deploy to a different location,
e.g. ../username.github.io/ instead of ./username.github.io/?
I have my ABlog project under source control in a git repository. Creating my username.github.io inside the current ABlog project creates a repo inside a repo and this causes errors (also I don't want to store the built site along with the ABlog repository -- although I could add a .gitignore).
Is it easy to change ablog deploy to deploy to a different location,
e.g. ../username.github.io/ instead of ./username.github.io/?
For ABlog ≥ 0.8.0, yes
For ablog-0.8.0 and above, you can use the -p option to specify a github repo location other than the default (<location of conf.py>/<your username>.github.io):
ablog deploy -p /the/path/for/your/local/github/pages/repo
i.e., in your case
ablog deploy -p ../username.github.io/
How to install the most recent ABlog version
Until version 0.8.0 is available on pypi, you can tell pip to install ablog directly from git:
pip install git+https://github.com/abakan/ablog.git
For Ablog < 0.8.0, no
For versions prior to 0.8.0, the old version of this answer applies:
With the current implementation of the ABlog-internal function ablog_deploy, the location of the target repository cannot be changed: the string gitdir (holding the path where the local repository will be created) is set to <confdir>/<github_pages option>.github.io, but the `github_pages` option is also [used to choose the remote repository](https://github.com/abakan/ablog/blob/0ed765d95a23ad7dce48c755773ac60dd08cf319/ablog/commands.py#L338), so passing anything other than the GitHub account name will make the process fail.
Manipulating confdir would be difficult and would result in the configuration file not being found, plus probably a bunch of other side effects.
However, if you're willing to modify ABlog's source code, it would not be hard to adapt the assignment of gitdir as you see fit (maybe by introducing another option): e.g., make it use confdir if your new option hasn't been set, and use your new option when it has.
However, if you're willing to modify ABlog's source code, it would not
be hard to adapt the assignment of gitdir as you see fit (maybe
introducing another option) to produce the decided effect. (E.g., make
it use confdir if your new option hasn't been set, and have it use
your new option instead if that option has been set.)

How does pip determine a Python package version?

When I use pip to install a package from source, it generates a version number for the package, which I can see using pip show <package>. But I can't find out how that version number is generated, and I can't find the version string in the source code. Can someone tell me how the version is generated?
The version number that pip uses comes from the setup.py (if you pip install a file, directory, repo, etc.) and/or the information in the PyPI index (if you pip install a package name). (Since these two must be identical, it doesn't really matter which.)
It's recommended that packages make the same string available as a __version__ attribute on their top-level module/package(s) at runtime that they put in their setup, but that isn't required, and not every package does.
And if the package doesn't expose its version, there's really no way for you to get it. (Well, unless you want to grub through the pip data trying to figure out which package owns a module and then get its version.)
Here's an example:
In the source code for bs4 (BeautifulSoup4), the setup.py file has this line:
version = "4.3.2",
That's the version that's used, directly or indirectly, by pip.
Then, inside bs4/__init__.py, there's this line:
__version__ = "4.3.2"
That means that Leonard Richardson is a nice guy who follows the recommendations, so I can import bs4; print(bs4.__version__) and get back the same version string that pip show beautifulsoup4 gives me.
But, as you can see, they're two completely different strings in completely different files. If he wasn't nice, they could be totally different, or the second one could be missing, or named something different.
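Since Python 3.8 the standard library can query the same metadata pip sees, via importlib.metadata; a minimal sketch (note it takes the distribution name, e.g. beautifulsoup4, not the import name bs4):

```python
from importlib import metadata  # standard library since Python 3.8

def dist_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        # Same string that `pip show <dist_name>` reports.
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

print(dist_version("this-is-not-installed"))  # None
```

Unlike the __version__ convention, this works even for packages whose authors never set a __version__ attribute, because it reads the installed metadata rather than the module.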
The OpenStack people came up with a nifty library named PBR that helps you manage version numbers. You can read the linked doc page for the full details, but the basic idea is that it either generates the whole version number for you out of git, or verifies your specified version number (in the metadata section of setup.cfg) and appends the dev build number out of git. (This relies on you using Semantic Versioning in your git repo.)
Instead of specifying the version number in code, tools such as setuptools-scm may use tags from version control. Sometimes the magic is not directly visible. For example PyScaffold uses it, but in the project's root folder's __init__.py one may just see:
import pkg_resources
try:
    __version__ = pkg_resources.get_distribution(__name__).version
except:
    __version__ = "unknown"
If, for example, the highest version tag in Git is 6.10.0, then pip install -e . will generate a local version number such as 6.10.0.post0.dev23+ngc376c3c (c376c3c being the short hash of the last commit) or 6.10.0.post0.dev23+ngc376c3c.dirty (if it has uncommitted changes).
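A setuptools-scm-style local version like the one above can be picked apart with a small regex; this is just an illustrative sketch using the stdlib (real tooling would use the packaging library's Version class instead):

```python
import re

v = "6.10.0.post0.dev23+ngc376c3c.dirty"

m = re.match(
    r"(?P<release>\d+(?:\.\d+)*)"   # 6.10.0          -> the last git tag
    r"(?:\.post(?P<post>\d+))?"     # post0           -> post-release marker
    r"(?:\.dev(?P<dev>\d+))?"       # dev23           -> 23 commits since the tag
    r"(?:\+(?P<local>[\w.]+))?$",   # ngc376c3c.dirty -> commit hash + dirty flag
    v,
)
print(m.group("release"))  # 6.10.0
print(m.group("dev"))      # 23
print(m.group("local"))    # ngc376c3c.dirty
```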
More complicated strings such as 4.0.0rc1 are usually hand-edited in the PKG-INFO file. For example:
# cat ./<package-name>.egg-info/PKG-INFO
...
Version: 4.0.0rc1
...
This makes it infeasible to obtain the version from within Python code.

use a relative path in requirements.txt to install a tar.gz file with pip

We're using a requirements.txt file to store all the external modules needed. Every module but one is fetched from the Internet. The other one is stored in a folder under the one holding the requirements.txt file.
BTW, this module can be easily installed with pip install.
I've tried using this:
file:folder/module
or this:
file:./folder/module
or even this:
folder/module
but always throws me an error.
Does anyone know which is the right way to do this?
Thanks
In the current version of pip (1.2.1) the way relative paths in a requirements file are interpreted is ambiguous and semi-broken. There is an open issue on the pip repository which explains the various problems and ambiguities in greater detail:
https://github.com/pypa/pip/issues/328
Long story short the current implementation does not match the description in the pip documentation, so as of this writing there is no consistent and reliable way to use relative paths in requirements.txt.
THAT SAID, placing the following in my requirements.txt:
./foo/bar/mymodule
works when there is a setup.py at the top level of the mymodule directory. Note the lack of the file: protocol designation and the inclusion of the leading ./. This path is not relative to the requirements.txt file, but to the current working directory. Therefore it is necessary to navigate into the same directory as the requirements.txt and then run the command:
pip install -r requirements.txt
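Concretely, the layout that answer assumes looks something like this (the names are illustrative, matching the ./foo/bar/mymodule line above):

```
project/                  <- run `pip install -r requirements.txt` from here
|-- requirements.txt      <- contains the line: ./foo/bar/mymodule
`-- foo/
    `-- bar/
        `-- mymodule/
            `-- setup.py  <- must exist at the top level of mymodule
```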
It's based on the current working directory (find it with os.getcwd() if needed) and the relative path you provide in the requirements file.
Your requirements file should look like this:
fabric==1.13.1
./some_dir/some_package.whl
packaging==16.8
Note this will only work for .whl files, not .exe.
Remember to keep an eye on the pip install output for errors.
For me, only the file: directive worked. This even works with AWS SAM, i.e. sam build. Here is my requirements.txt; englishapps is my own custom Python package that I need in AWS Lambda:
requests
file:englishapps-0.0.1-py3-none-any.whl
As was mentioned before, the files are relative to the current working directory, not to the requirements.txt.
Since v10.0, pip requirements files support environment variables in the format ${VAR_NAME}. This can be used as a mechanism to specify a file location relative to the requirements.txt. For example:
# Set REQUIREMENTS_DIRECTORY outside of pip
${REQUIREMENTS_DIRECTORY}/folder/module
Another option is to use the environment manager called Pipenv to manage this use case.
The steps after you do the pipenv install for a new project:
pipenv install -e app/deps/fastai (-e is editable, and is optional)
then you will see the following line in your Pipfile:
fastai = {editable = true,path = "./app/deps/fastai"}
Here are similar issues:
https://github.com/pypa/pipenv/issues/209#issuecomment-337409290
https://stackoverflow.com/a/53507226/7032846
A solution that worked for me for both local and remote files (via a Windows share). Here is an example requirements.txt:
file:////REMOTE_SERVER/FOLDER/myfile.whl
