Good evening! I have a .yml file with this structure:
name: web
channels:
- defaults
dependencies:
- zope.event=4.4=py37_0
- zope.interface=5.1.0=py37haf1e3a3_0
- zstd=1.4.5=h41d2c2f_0
- pip:
  - asgiref==3.2.10
  - cloudpickle==1.3.0
The actual file is much bigger than this. When I run conda env create --file ambiente.yml I get Solving environment: failed with ResolvePackageNotFound: and a list of all the missing dependencies. How can I install all the dependencies at once?
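A common cause of this error with exported files like the one above is that the pinned build strings (the py37_0 and h41d2c2f_0 suffixes) are platform-specific, so they cannot be resolved on a different machine or OS. Assuming that is what is happening here, a frequently used workaround is to drop the build strings (or re-export the source environment with conda env export --no-builds) and let conda resolve versions only, roughly like this:
name: web
channels:
- defaults
dependencies:
- zope.event=4.4
- zope.interface=5.1.0
- zstd=1.4.5
- pip:
  - asgiref==3.2.10
  - cloudpickle==1.3.0
Anything conda still cannot find can then be moved under the pip: section, as the answers further down do for other packages.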
Hopefully someone can help me. I've searched the internet, but I can't seem to find a solution.
I'm trying to create an environment using conda env create -n deltaconv -f environment.yml and I'm getting this response from conda:
[Folder]>conda env create -n deltaconv -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- nvidia::cudatoolkit=11.3
I've just freshly installed Miniconda for the task and the environment looks like this:
channels:
- nvidia
- pytorch
- pyg
dependencies:
- pip=21.2.4
- python=3.9.12
- setuptools=52.0.0
- wheel=0.36.2
- protobuf~=3.19.0
- nvidia::cudatoolkit=11.3
- pytorch::pytorch=1.11.0
- pytorch::torchvision
- pytorch::torchaudio
- pyg::pyg=2.0.4
- pip:
  - numpy==1.21.5
  - progressbar2==4.0.0
  - tensorboard==2.8.0
  - jupyter==1.0.0
  - openmesh==1.2.1
  - h5py==3.6.0
  - pytest==7.1.2
  - deltaconv==1.0.0
Does anyone know why conda is unable to find cudatoolkit 11.3 from the nvidia channel?
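One way to narrow this down, offered here only as a diagnostic sketch rather than a confirmed fix, is to ask conda what it can actually see for that spec on your platform, for example:
conda search 'nvidia::cudatoolkit=11.3'
conda search -c nvidia cudatoolkit
If the search comes back empty, the package simply is not published on that channel for your operating system and architecture.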
I'm trying out OpenCV with Python bindings for which I'm using the following YML file:
name: opencv-python-sandbox
channels:
- menpo
- conda-forge
- defaults
dependencies:
- jupyter=1.0.0
- jupyterlab=0.34.9
- keras=2.9.0
- matplotlib=3.5.2
- numpy=1.23.1
- opencv-python==4.6.0.66
- pandas=1.4.3
- python=3.8.0
- scikit-learn=1.1.1
- scipy=1.8.1
- tensorboard=2.9.1
- tensorflow=2.9.1
When I ran it, it threw some errors and said that it is not able to resolve OpenCV and TensorFlow:
(ml-sandbox) joesan#joesan-InfinityBook-S-14-v5:~/Projects/Private/ml-projects/ml-sandbox/opencv-python-sandbox$ conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- tensorflow=2.9.1
- opencv-python==4.6.0.66
How do I get this fixed? Do I need to add pip to my environment.yml and then manually install opencv via pip after activating the conda environment?
Not sure why this was not answered by anyone else, as this seems to be a very common problem. Nevertheless, I was able to solve it by adding pip as a dependency in my environment.yml and using pip to install OpenCV and any other libraries that won't resolve with conda.
My environment.yml looks like this:
name: ml-sandbox
channels:
- menpo
- conda-forge
- defaults
dependencies:
- jupyter=1.0.0
- jupyterlab=0.34.9
- keras=2.9.0
- matplotlib=3.5.2
- pandas=1.4.3
- python=3.8.0
- pip=22.1.2
- scikit-learn=1.1.1
- scipy=1.8.1
- tensorboard=2.9.1
- pip:
  - numpy==1.23.1
  - opencv-contrib-python==4.6.0.66
You have fixed it yourself by moving the requirements to the pip section, which results in an installation from PyPI. I just wanted to add an explanation of why your original attempt did not work, and some suggestions in case you want to stick strictly to conda. Note that for both tensorflow and opencv, the packages provided on conda-forge are not maintained by the respective developers, which often means they lag behind in version.
The Python bindings for OpenCV are called py-opencv on conda-forge and have different version strings, so you would need to put py-opencv==4.6.0 in your yml.
tensorflow on conda-forge only goes up to 2.8.1, so when strictly sticking to conda you would need to downgrade the version.
You can always check the available versions of a package with conda search -c <channel> <package-name> from your terminal.
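For the two packages discussed above that would look like this, for example (the exact results depend on the current state of conda-forge):
conda search -c conda-forge py-opencv
conda search -c conda-forge tensorflow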
I added conda-forge to the conda channels:
$ conda config --show channels
channels:
- conda-forge
- defaults
my requirements.txt contains, among others, these lines:
ipython-genutils==0.2.0
jupyter-client==6.1.12
jupyterlab-pygments==0.1.2
appnope==0.1.2
jupyterlab-widgets==1.0.0
data==0.4
prometheus-client==0.11.0
latex==0.7.0
scipy==1.5.4
jupyter-core==4.7.1
jupyter-console==6.4.0
async-generator==1.10
vg==1.10.0
sklearn==0.0
postgis==1.0.4
When I try to create a new environment from this requirements.txt using conda with
conda create --name myenv --file requirements.txt
I get the following errors:
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- ipython-genutils==0.2.0
- jupyter-client==6.1.12
- jupyterlab-pygments==0.1.2
- appnope==0.1.2
- jupyterlab-widgets==1.0.0
- data==0.4
- prometheus-client==0.11.0
- latex==0.7.0
- scipy==1.5.4
- jupyter-core==4.7.1
- jupyter-console==6.4.0
- async-generator==1.10
- vg==1.10.0
- sklearn==0.0
- postgis==1.0.4
Current channels:
- https://conda.anaconda.org/conda-forge/linux-64
- https://conda.anaconda.org/conda-forge/noarch
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
As you can see, conda-forge is listed under "current channels" and ipython-genutils==0.2.0 is available in conda-forge. However, the package is not found. How can I fix this problem?
I tried both conda config --set channel_priority flexible and ... stable
I'm running Ubuntu 20.04 LTS, Python 3.10 and conda 4.12.0.
It looks to me like this should have been a requirements.txt to be used with pip. Note that conda packages can have slightly different names from what is available on PyPI.
ipython-genutils is not the correct name; looking at the link you have provided, the package is called ipython_genutils, with an underscore. The same is true for the other packages that you have written with a hyphen: they should all be spelled with an underscore.
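If in doubt, the conda-forge spelling can be confirmed directly from the terminal, for example:
conda search -c conda-forge ipython_genutils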
That leaves
- sklearn==0.0
- latex==0.7.0
- vg==1.10.0
- scipy==1.5.4
- postgis==1.0.4
- data==0.4
- appnope==0.1.2
sklearn==0.0 seems to be a corrupt line in your file; the package's name is scikit-learn. latex, vg and data are not available on conda channels as far as I can tell. The same goes for scipy==1.5.4: only 1.5.3 and 1.6 are available. postgis only goes back to 2.4.3 on conda-forge, see here, but also seems to be different from what is available on PyPI. appnope is a package only available for macOS, see its description:
Simple package for disabling App Nap on macOS >= 10.9, which can be problematic.
So with that in mind, we can create a yml file that installs from both conda channels and from pip. Changes to your file: replaced - with _, removed appnope, added a pip dependency, renamed sklearn to scikit-learn and moved it, together with latex, scipy, vg, data and postgis, to the pip requirements. If you are flexible with scipy==1.5.4, I would advise changing it to scipy==1.5.3 or scipy==1.6.0 and moving scipy and scikit-learn out of the pip-installed packages (a variant with that change is sketched below, after the create command):
name: myenv
dependencies:
- ipython_genutils==0.2.0
- jupyter_client==6.1.12
- jupyterlab_pygments==0.1.2
- jupyterlab_widgets==1.0.0
- prometheus_client==0.11.0
- jupyter_core==4.7.1
- jupyter_console==6.4.0
- async_generator==1.10
- pip
- pip:
  - scikit-learn
  - latex==0.7.0
  - scipy==1.5.4
  - vg==1.10.0
  - data==0.4.0
  - postgis==1.0.4
Save this as environment.yml and then do
conda env create -f environment.yml
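If you do relax the scipy pin as suggested above, the conda/pip split could instead look like the sketch below (the choice of scipy==1.6.0 and leaving scikit-learn unpinned are assumptions on my part):
name: myenv
dependencies:
- ipython_genutils==0.2.0
- jupyter_client==6.1.12
- jupyterlab_pygments==0.1.2
- jupyterlab_widgets==1.0.0
- prometheus_client==0.11.0
- jupyter_core==4.7.1
- jupyter_console==6.4.0
- async_generator==1.10
- scipy==1.6.0
- scikit-learn
- pip
- pip:
  - latex==0.7.0
  - vg==1.10.0
  - data==0.4.0
  - postgis==1.0.4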
According to the documentation, if I use
conda env export > file.yml
I am able to share the environment with others. For better cross-platform compatibility, a preferable approach would be:
conda env export --from-history > file.yml
listing only the packages explicitly requested (and not their associated dependencies).
That is what I did: I created a requirements yml file with the second command. Here it is:
name: torch
channels:
- defaults
dependencies:
- python=3.8
- humanize
- nltk
- pandas
- lxml
- numpy
- bs4
- fire
- neptune-client
- tqdm
- pyyaml
- torchaudio
- pytorch
- cudatoolkit=11.3
- torchvision
Among those packages, some were installed from the conda-forge channel; that channel information seems to be lost in the yaml file.
Indeed, if I try and use that file for cloning the environment (same machine):
conda env create -n torch2 --file=file.yml
I get an error for the packages installed from non-default channels (from conda-forge I explicitly installed only neptune-client and fire):
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- torchaudio
- neptune-client
- fire
However, it seems that channels should be included in the yml. For example, on this GitHub issue page I read:
Currently, conda env export does include channels information.
which is the comment that closes the issue.
NOTE: pytorch was installed with
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
from the official web page.
What am I missing?
The docs are wrong, or misleading at best, when they communicate that conda env export --from-history exports the channels that packages are installed from, or indeed any channels at all. This is not the behavior you get, nor is it what I get myself:
$ conda env export | head -n 8
name: smithy
channels:
- bioconda
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=1_gnu
$ conda env export --from-history | head -n 8
name: smithy
channels:
- defaults
dependencies:
- mamba
- constructor
- cookiecutter
- conda-build
Note that conda env export does include channel information, but in a highly pinned way that's almost guaranteed not to work across platforms. So that's not going to work for your use case either. I'm not sure if this is a bug or an oversight, but it's clearly not producing the desired result for the user.
Now to offer an (opinionated) recommendation on how to proceed: your best bet is to semi-manually curate an environment YAML file yourself and use that as a single source of truth. It looks like you can use your name: torch ... file as a starting point, adding in the channels and maybe some other details as you go. Don't forget you can tie an individual package to a channel with the channel::package syntax, a la
name: torch
channels:
- defaults
- conda-forge
- pytorch
dependencies:
- python=3.8
<SNIP>
- pytorch::torchaudio
- pytorch::pytorch
- pytorch::cudatoolkit=11.3
- pytorch::torchvision
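As a usage note, the same channel::package spec also works directly on the command line, so a one-off install pinned to a channel can be written as, for example:
conda install pytorch::torchaudio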
After building a conda package and installing it into a new, empty environment, my package cannot be imported because it is placed in the python3.8/site-packages directory, whereas the environment's python executable and all of the package's dependencies are under python3.7.
Starting from an empty environment:
conda create -n myenv
conda install --use-local mypackage
The resulting install ends up with the following:
~/miniconda3/envs/myenv/lib/python3.8/site-packages
|-mypackage/
|-mypackage-0.0.0-py3.8.egg.info/
~/miniconda3/envs/myenv/lib/python3.7/site-packages
|- all of the dependencies...
The resulting conda env also ends up having its python version set to 3.7. So obviously, when I open a python console and attempt to import my package, it fails. The perplexing thing is that I do have an import test in my meta.yaml that tests importing my package, and it seems to pass during the conda build process.
If I pin the python version in my meta.yaml to python=3.7 instead of python>=3.7, my package ends up installed in python3.7/site-packages with everything else and it works fine (a sketch of that pin is shown after the build script below).
The relevant build requirements from my meta.yaml:
requirements:
  build:
    - setuptools
    - nodejs>=14.5.0
    - mkdocs>=1.1.2
    - mkdocs-material>=5.4.0
    - mkdocs-material-extensions>=1.0
  host:
    - python
  run:
    - python>=3.7
    - rabbitmq-server>=3.7.16
    - pika>=1.1.0
    - pyzmq>=19.0.1
    - pyyaml>=5.3.1
    - numpy>=1.18.5
    - sqlalchemy>=1.3.18
    - sqlite>=3.28.0
    - netifaces>=0.10.9
    - psutil>=5.7.0
    - uvloop>=0.14.0
    - numexpr>=2.7.1
    - fastapi>=0.59.0
    - uvicorn>=0.11.3

test:
  imports:
    - mypackage
The relevant line from my conda recipe build.sh:
$PYTHON setup.py install
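For reference, a minimal sketch of the pin the question describes as working; the =3.7 value comes from the question itself, and keeping host and run on the same pin (so that the build-time interpreter matches the one resolved at install time) is the assumption here:
requirements:
  host:
    - python=3.7
  run:
    - python=3.7
    # ...remaining run requirements as listed above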