Python package-dependency charts: finding which package pins a version of a library

I have a conda install that keeps giving me pysal back at version 2.1.0 (the current release is 2.5.0), and I'm trying to figure out which of the 73 already-installed packages has specified such an old version.
I know pydeps will find me the dependencies of a given package, i.e. all the packages required to run foo, but I want to go the other way: which packages need foo to be present (and the version they ask for would be nice).
(I've got as far as recognising it's probably going to involve hunting through the JSON files in conda-meta.)

You can use pipdeptree:
pip install pipdeptree
pipdeptree
and get the dependencies in a tree form, like:
flake8==2.5.0
  - mccabe [required: >=0.2.1,<0.4, installed: 0.3.1]
  - pep8 [required: !=1.6.0,>=1.5.7,!=1.6.1,!=1.6.2, installed: 1.5.7]
  - pyflakes [required: >=0.8.1,<1.1, installed: 1.0.0]
ipdb==0.8
  - ipython [required: >=0.10, installed: 1.1.0]
One of the installed packages has pulled in this version, but it probably doesn't require this specific version, so you can also try just installing the most recent release and testing whether everything still works. To go the other way round, as you ask, pipdeptree also has a reverse mode: pipdeptree --reverse --packages pysal should list the installed packages that require pysal, along with the version constraints they declare.
Testing and figuring out dependencies is one of the reasons we use virtual environments (like conda's), so don't be afraid of breaking things. If something does break, just save the env:
conda env export > your_env_name.yml
and recreate the environment later:
conda env create -f your_env_name.yml
or just clone your current env to a backup before changing it:
conda create --name backup_env --clone current_env_name
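For the conda side specifically, since the question mentions the JSON files in conda-meta: each installed package has a metadata file there with a depends list, so a short script can report which packages declare a dependency on pysal and the constraint they ask for. A minimal sketch, assuming a standard conda layout and that the environment in question is active:
import glob
import json
import os
import sys

# Look in the active environment's conda-meta directory (one JSON file per installed package).
prefix = os.environ.get("CONDA_PREFIX", sys.prefix)
for path in glob.glob(os.path.join(prefix, "conda-meta", "*.json")):
    with open(path) as fh:
        meta = json.load(fh)
    # Each entry in "depends" looks like "numpy >=1.14" or just "python".
    for dep in meta.get("depends", []):
        if dep.split()[0] == "pysal":
            print(f"{meta['name']} {meta['version']} requires: {dep}")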

Related

How to install seamsh in python

I tried to install seamsh 0.4.4 (https://pypi.org/project/seamsh/) but could not.
I spent a lot of time on it.
I needed the gdal library to use seamsh, so I installed gdal with conda because it was too complex to install without conda.
But seamsh itself isn't available through conda, so I don't know what to do.
Could you tell me how to install seamsh?
I use Windows and VS Code.
Thank you.
There's a pending PR to add it to Conda Forge. In the meantime, one can install it from PyPI. For example, try using the following environment:
seamsh.yaml
name: seamsh
channels:
  - conda-forge
  - nodefaults  # only use conda-forge
dependencies:
  ## core software
  - python=3.10  # set to preference
  - pip
  ## libraries
  - gdal
  ## Conda Python packages
  - numpy
  - scipy
  - python-gmsh >=4.10  # used to ensure latest version
  ## PyPI Python packages
  - pip:
      - seamsh
Create with:
conda env create -n seamsh -f seamsh.yaml
At least on macOS, I'm seeing pip reinstall the Python gmsh package, despite the Conda Forge version being installed. Unsure why it isn't acknowledging the Conda version.
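A quick smoke test (a sketch, assuming the environment was created and activated as above): confirm that seamsh from PyPI and the GDAL bindings from conda-forge both import.
# Run inside the activated "seamsh" environment.
import seamsh                # installed from PyPI by pip
from osgeo import gdal       # provided by the conda-forge gdal package
print("GDAL", gdal.__version__)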

Stack a few python packages on top of a large conda environment [duplicate]

I'm looking for a solution where environments do inherit from root, but searching for an answer turns up a lot of confusion: many OPs believe they are inheriting packages when they are not, so the search results find those questions, but the answers describe the opposite (or just explain that the OP is mistaken).
That said, one OP actually has a similar objective: Can packages be shared across Anaconda environments? That OP says they are running out of space on their HDD, the implication being that "sharing" should reuse the same installed packages in the new environment. The (not accepted) answer is to use --clone.
I also found this post, Do newly created conda envs inherit all packages from the base env?, which says --clone does not share packages. There the OP believed their new environment "shared" packages, and then concluded that "shared" packages don't exist.
What is the use of non-separated anaconda environments?
I tested both the --clone flag and the Conda docs' instructions to "build identical environments". Both env directories end up the same size: 2 GB+.
(base) $ conda list --explicit > spec-file.txt
# Produced Size On Disk: 2.14 GB (2,305,961,984 bytes)
(base) conda create --name myclone --clone root
# Produced Size On Disk, clone: 2.14 GB (2,304,331,776 bytes)
The only difference was that building the identical environment downloaded the packages again, while cloning copied the local files and took much less time.
I use Miniconda to deploy CLI tools to coworker workstations. Basically, the tools all use the same packages, with the occasional exception, when I need to add a particular module which I don't want in the base install.
The goal is to use conda create for environments that extend the base packages, similar to virtualenv --system-site-packages, without duplicating their installation.
UPDATE 2020-02-08
Responding to @merv and his link to this post (Why are packages installed rather than just linked to a specific environment?), which says Conda envs inherit base packages by default. I had another opportunity this weekend to work on the problem. Here is the base case:
Downloaded the Miniconda installer and installed it with these settings:
Install for me
Install location: (C:\Users\xtian\Miniconda3_64)
NOTE: I added the _64
Advanced Options
Add Anaconda to the system PATH variable, False
Register Anaconda as the system Python 3.7, True
I updated pip and setuptools:
conda update pip setuptools
Below, I list packages in base:
(base) C:\Users\xtian>conda list
# packages in environment at C:\Users\xtian\Miniconda3_64:
#
# Name Version Build Channel
asn1crypto 1.3.0 py37_0
ca-certificates 2020.1.1 0
certifi 2019.11.28 py37_0
cffi 1.13.2 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
conda 4.8.2 py37_0
conda-package-handling 1.6.0 py37h62dcd97_0
console_shortcut 0.1.1 3
cryptography 2.8 py37h7a1dbc1_0
idna 2.8 py37_0
menuinst 1.4.16 py37he774522_0
openssl 1.1.1d he774522_3
pip 20.0.2 py37_1
powershell_shortcut 0.0.1 2
pycosat 0.6.3 py37he774522_0
pycparser 2.19 py37_0
pyopenssl 19.1.0 py37_0
pysocks 1.7.1 py37_0
python 3.7.4 h5263a28_0
pywin32 227 py37he774522_1
requests 2.22.0 py37_1
ruamel_yaml 0.15.87 py37he774522_0
setuptools 45.1.0 py37_0
six 1.14.0 py37_0
sqlite 3.31.1 he774522_0
tqdm 4.42.0 py_0
urllib3 1.25.8 py37_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_1
wheel 0.34.2 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
yaml 0.1.7 hc54c509_2
Then I successfully create a new venv:
(base) C:\Users\xtian>conda create -n wsgiserver
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: C:\Users\xtian\Miniconda3_64\envs\wsgiserver
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Here I activate the new wsgiserver virtual environment, list its packages, and finally test with pip, but there is no pip! I tested today with both the 64-bit and 32-bit installers:
(base) C:\Users\xtian>conda activate wsgiserver
(wsgiserver) C:\Users\xtian>conda list
# packages in environment at C:\Users\xtian\Miniconda3_64\envs\wsgiserver:
#
# Name Version Build Channel
(wsgiserver) C:\Users\xtian>pip
'pip' is not recognized as an internal or external command,
operable program or batch file.
Should Conda environments inherit base packages?
No. The recommended workflow is to use conda create --clone to create a new standalone environment, and then mutate that environment to add additional packages. Alternatively, one can dump the template environment to a YAML (conda env export > env.yaml), edit it to include or remove packages, and then create a new environment from that (conda env create -f env.yaml -n foo).
Concern about this wasting storage is unfounded in most situations.[1] There can be a mirage of new environments taking up more space than they really do, due to Conda's use of hardlinks to minimize redundancy. A more detailed analysis of this can be found in the question, Why are packages installed rather than just linked to a specific environment?.
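A rough way to see the hardlinking for yourself (a sketch with hypothetical Linux/macOS paths; adjust to your install locations): a module file present in both base and a clone should resolve to the same inode, with a link count greater than one.
import os

# Hypothetical paths: the same installed file as seen from base and from a cloned env.
base_copy = os.path.expanduser("~/miniconda3/lib/python3.7/site-packages/six.py")
clone_copy = os.path.expanduser("~/miniconda3/envs/myclone/lib/python3.7/site-packages/six.py")
a, b = os.stat(base_copy), os.stat(clone_copy)
print("same inode:", a.st_ino == b.st_ino, "| link counts:", a.st_nlink, b.st_nlink)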
Can Conda environments inherit base packages?
It's not supported, but it's possible. First, let's explicitly state that nested activation of Conda environments via the conda activate --stack command does not enable or help to allow inheritance of Python packages across environments. This is because it does not manipulate PYTHONPATH, but instead only keeps the previous active environment on PATH and skips the deactivate scripts. A more detailed discussion of this is available in this GitHub Issue.
Now that we've avoided that red herring, let's talk about PYTHONPATH. One can use this environment variable to include additional site-packages directories to search. So, naively, something like
conda activate foo
PYTHONPATH=$CONDA_ROOT/lib/python3.7/site-packages python
should launch Python with the packages of both base and foo available to it. A key constraint for this to work is that the Python in the new environment must match that of base up to and including the minor version (in this case 3.7.*).
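A quick way to check that the inheritance actually took effect (a sketch): launched with PYTHONPATH set as above, the interpreter's sys.path should include the site-packages directories of both environments.
# Save under any script name and run it with PYTHONPATH set as shown above.
import sys

# Both foo's and base's site-packages directories should show up here.
print([p for p in sys.path if p.endswith("site-packages")])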
Thinking through the details
While this will achieve package inheritance, we need to consider: Will this actually conserve space? I'd argue that in practice it likely won't, and here's why.
Presumably, we don't want to physically duplicate the Python installation, but the new environment must have a Python installed in order to help constrain solving for the new packages we want. To do this, we should match not just the Python version (conda create -n foo python=3.7) but the exact same build as base:
# first check base's python
conda list -n base '^python$'
# EXAMPLE RESULT
# Name Version Build Channel
python 3.7.6 h359304d_2
# use this when creating the environment
conda create -n foo python=3.7.6=h359304d_2
This will let Conda do its linking thing and use the same physical copy in both environments. However, there is no guarantee that Python's dependencies will also reuse the packages in base. In fact, if any compatible newer versions are available, it will download and install those.
Furthermore, let's say that we now install scikit-learn:
conda install -n foo scikit-learn
This again is going to check for the newest versions of it and its dependencies, independent of whether older but compatible versions of those dependencies are already available through base. So, more packages are unnecessarily being installed into the package cache.
The pattern here seems to be that we really want to find a way to have the foo env install new packages, but use as many of the existing packages to satisfy dependencies. And that is exactly what conda create --clone already does.[2]
Hence, I lose the motivation to bother with inheritance altogether.
Note
I'd speculate that for the special case of pure Python packages it may be plausible to use pip install --target from the base environment to install packages compatible with base to a location outside of base. The user could then add this directory to PYTHONPATH before launching python from base.
This would not be my first choice. I know the clone strategy is manageable; I wouldn't know what to expect with this going long-term.
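As a sketch of that last idea (the directory and package are hypothetical): after something like pip install --target=/opt/extra-site some_pure_python_package run from base, the extra directory can also be added at runtime instead of via PYTHONPATH.
import site
import sys

# Hypothetical location used with pip install --target.
site.addsitedir("/opt/extra-site")
print("/opt/extra-site" in sys.path)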
[1] This will hold as long as the locations of the package cache (pkgs_dirs) and where the environment is created (which defaults to envs_dirs) are on the same volume. Configurations with multiple volumes should be using softlinks, which will ultimately have the same effect. Unless one has manually disabled linking of both types, Conda will do a decent job at silently minimizing redundancy.
[2] Technically, one might also have a stab at using the --offline flag to force Conda to use what it already has cached. However, the premise of OP is that the additional package is new, so it may not be wise to assume we already have a compatible version in the cache.

Not able to install dependencies mentioned in requirements.txt in my conda environment

I am not able to install the dependencies listed in requirements.txt (https://github.com/liuzhen03/ADNet) via pip install. Pip tries to install them to a different path, where it fails with this error:
ERROR: Could not install packages due to an OSError:
[Errno 28] No space left on device
I want to install it only in my Conda environment. I tried
conda install --file requirement.txt
but that still fails with an error.
Can someone help me sort this out?
Conda and PyPI packages don't necessarily use the same package names, so one can't simply feed Conda a pip freeze file and treat it like it came from conda list --export. However, Conda environments fully support Pip, so one can take two approaches.
Option 1: PyPI-only packages
One can use Conda's YAML environment specification to define the environment and Python version, and let Pip install the rest.
adnet.yaml
name: adnet
channels:
  - conda-forge
dependencies:
  # core
  - python=3.7
  - pip
  # PyPI packages
  - pip:
      - -r https://github.com/liuzhen03/ADNet/raw/main/requirement.txt
Python 3.7 is specified because the scikit-image==0.16.2 pinned in the requirements file dates from 2019, so that seems like the appropriate Python version to use.
Option 2: Prefer Conda packages
The other option is to do a little work and see what packages are available through Conda, and source from there first. This has the advantage that all Conda packages are precompiled and you can add in stuff like specifying a BLAS implementation or CUDA version.
Basically, the only thing not on Conda is thop, and the two other packages Conda complained about, torch and opencv-python, go by the names pytorch and py-opencv in the Conda repository, respectively. So, a basic YAML translation would look like:
adnet.yaml
name: so-adnet-conda
channels:
  - pytorch
  - conda-forge
dependencies:
  # core
  - python=3.7
  - pip
  # specify blas or cuda versions here
  # Conda packages
  - imageio=2.9.0
  - matplotlib=3.3.3
  - scikit-image=0.16.2
  - easydict=1.9
  - pytorch=1.2.0
  - torchvision=0.4.0
  - pillow
  - py-opencv >=4.0,<5.0
  - tensorboardX=2.1
  - tensorboard
  - future
  - lmdb
  - pyarrow
  # PyPI packages
  - pip:
      - thop
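A quick sanity check (a sketch) after creating and activating the environment: the Conda package names differ from the PyPI ones, but the import names the ADNet code expects should all resolve.
import torch       # Conda package: pytorch
import cv2         # Conda package: py-opencv
import skimage     # Conda package: scikit-image
import thop        # PyPI package installed via pip
print(torch.__version__, cv2.__version__, skimage.__version__)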

Why does pip freeze return "gibberish" instead of package==VERSION?

Here is what I did:
❯ pip freeze
aiohttp @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/50/32/0b/b64b02b6cefa4c089d84ab9edf6f0d960ca26cfbe57fe0e693a00912da/aiohttp-3.6.2-py3-none-any.whl
async-timeout @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/0d/5d/3e/630122e534c1b25e36c3142597c4b0b2e9d3f2e0a9cea9f10ac219f9a7/async_timeout-3.0.1-py3-none-any.whl
attrs @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/7f/e7/44/32ca3c400bb4d8a2f1a91d1d3f22bbaee2f4727a037aad898fbf5d36ce/attrs-20.2.0-py2.py3-none-any.whl
chardet @ file:///Users/aiven/Library/Caches/pypoetry/artifacts/c2/02/35/0d93b80c730b360c5e3d9bdc1b8d1929dbd784ffa8e3db025c14c045e4/chardet-3.0.4-py2.py3-none-any.whl
...
Version of pip:
❯ pip -V
pip 20.2.3 from /Users/aiven/projects/foobar/venv/lib/python3.8/site-packages/pip (python 3.8)
I expected something like this:
> pip freeze
foo==1.1.1
bar==0.2.1
pip freeze -h wasn't very helpful...
For context: I installed packages into virtualenv using poetry.
This comes from the changes made to support PEP 610; in particular, refer to the Freezing an environment section. The notion of what "freezing" entails has been expanded to include preserving the direct URL sources of packages that were installed from direct origins.
Poetry 1.1.0 introduced a new installer that handles discovery and download of artifacts for dependencies itself. This differs from the behaviour in 1.0.10, which simply let pip handle discovery and download of the required artifacts (wheels). It means that packages are now installed using direct URL origins, which causes pip freeze to use the direct reference format specified in PEP 508 (e.g. package @ file://../package.whl). For those interested, the URL in question is saved in <package>-<version>.dist-info/direct_url.json in the virtual env's site-packages directory.
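To see exactly what pip recorded for one of these packages, you can read that file back (a sketch, using the aiohttp entry from the question; importlib.metadata requires Python 3.8+):
import json
from importlib.metadata import distribution

# read_text returns None if the package has no direct_url.json recorded.
raw = distribution("aiohttp").read_text("direct_url.json")
print(json.loads(raw) if raw else "no direct_url.json recorded")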
You can get the old-style output (not sure whether this will change in the future) with the following command:
pip --disable-pip-version-check list --format=freeze

testing a virtual environment (virtualenv)

I'm not sure if it makes sense, but I would like to be able to test my virtual environments to check that everything a particular project needs has indeed been installed from its requirements files and that none of the requirements/dependencies are missing.
How could I do it?
Use yolk to spot out-of-date dependencies:
$ yolk --show-updates
Paste 1.7.2 (1.7.5.1)
PasteDeploy 1.3.3 (1.5.0)
PasteScript 1.7.3 (1.7.5)
coverage 3.4 (3.6)
…
To install missing ones, the usual way is to have a requirements.txt for pip install -r. If you mean how to initially build one of those, then running e.g. pylint on your project will uncover unsatisfied imports.
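If you want to check a requirements file directly against the environment, a small script can do it (a sketch; assumes a requirements.txt of plain name==version lines and Python 3.8+ for importlib.metadata):
from importlib.metadata import version, PackageNotFoundError

with open("requirements.txt") as fh:
    for line in fh:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, wanted = line.partition("==")
        try:
            installed = version(name)
        except PackageNotFoundError:
            print(f"MISSING   {name}")
            continue
        if wanted and installed != wanted:
            print(f"MISMATCH  {name}: wanted {wanted}, installed {installed}")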
