This is a specific instance of a general problem that I run into when updating packages using conda.
I have an environment that is working great on machine A. I want to transfer it to machine B.
But, machine A has GTX1080 gpus, and due to configuration I cannot control, requires cudatoolkit 10.2.
Machine B has A100 gpus, and due to configuration I cannot control, requires cudatoolkit 11.1.
I can easily export Machine A's environment to yml, and create a new environment on Machine B using that yml.
However, I cannot seem to update cudatoolkit to 11.1 on that environment on Machine B.
I try
conda install cudatoolkit=11.1 -c conda-forge
and I am met with ~5 minutes of conflict and retrying-solve messages that ultimately don't tell me anything useful (sorry, I did not capture the output to post here; it is quite voluminous).
Short of re-creating the environment from scratch on Machine B, is there any way to update just cudatoolkit to the version that is required for that machine's GPUs?
I have also tried various permutations of conda update ... with no success.
If it helps, here is the yml file from Machine A:
name: VAE180
channels:
- pytorch
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- absl-py=0.12.0=pyhd8ed1ab_0
- aiohttp=3.7.4=py38h27cfd23_1
- argh=0.26.2=pyh9f0ad1d_1002
- async-timeout=3.0.1=py_1000
- attrs=20.3.0=pyhd3deb0d_0
- blas=1.0=mkl
- blinker=1.4=py_1
- brotlipy=0.7.0=py38h8df0ef7_1001
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h36c2ea0_0
- ca-certificates=2020.12.5=ha878542_0
- cachetools=4.2.1=pyhd8ed1ab_0
- certifi=2020.12.5=py38h578d9bd_1
- cffi=1.14.5=py38h261ae71_0
- chardet=3.0.4=py38h924ce5b_1008
- click=7.1.2=pyh9f0ad1d_0
- cloudpickle=1.6.0=py_0
- configparser=5.0.2=pyhd8ed1ab_0
- cryptography=3.4.6=py38ha5dfef3_0
- cudatoolkit=10.2.89=hfd86e86_1
- cycler=0.10.0=py38_0
- dbus=1.13.18=hb2f20db_0
- docker-pycreds=0.4.0=py_0
- expat=2.2.10=he6710b0_2
- ffmpeg=4.3=hf484d3e_0
- fontconfig=2.13.1=h6c09931_0
- freetype=2.10.4=h5ab3b9f_0
- fsspec=0.8.7=pyhd8ed1ab_0
- future=0.18.2=py38h578d9bd_3
- gitdb=4.0.5=pyhd8ed1ab_1
- gitpython=3.1.14=pyhd8ed1ab_0
- glib=2.67.4=h36276a3_1
- gmp=6.2.1=h2531618_2
- gnutls=3.6.5=h71b1129_1002
- google-auth=1.24.0=pyhd3deb0d_0
- google-auth-oauthlib=0.4.1=py_2
- gql=0.1.0=py_0
- graphql-core=3.1.3=pyhd8ed1ab_0
- grpcio=1.33.2=py38heead2fc_2
- gst-plugins-base=1.14.0=h8213a91_2
- gstreamer=1.14.0=h28cd5cc_2
- gym=0.18.0=py38h81c977d_0
- icu=58.2=he6710b0_3
- idna=2.10=pyh9f0ad1d_0
- importlib-metadata=3.7.3=py38h578d9bd_0
- intel-openmp=2020.2=254
- joblib=1.0.1=pyhd3eb1b0_0
- jpeg=9b=h024ee3a_2
- kiwisolver=1.3.1=py38h2531618_0
- lame=3.100=h7b6447c_0
- lcms2=2.11=h396b838_0
- ld_impl_linux-64=2.33.1=h53a641e_7
- libffi=3.3=he6710b0_2
- libgcc-ng=9.1.0=hdf63c60_0
- libgfortran-ng=7.3.0=hdf63c60_0
- libiconv=1.15=h63c8f33_5
- libpng=1.6.37=hbc83047_0
- libprotobuf=3.14.0=h8c45485_0
- libstdcxx-ng=9.1.0=hdf63c60_0
- libtiff=4.2.0=h3942068_0
- libuuid=1.0.3=h1bed415_2
- libuv=1.40.0=h7b6447c_0
- libwebp-base=1.2.0=h27cfd23_0
- libxcb=1.14=h7b6447c_0
- libxml2=2.9.10=hb55368b_3
- lz4-c=1.9.3=h2531618_0
- markdown=3.3.4=pyhd8ed1ab_0
- matplotlib=3.3.4=py38h06a4308_0
- matplotlib-base=3.3.4=py38h62a2d02_0
- mkl=2020.2=256
- mkl-service=2.3.0=py38he904b0f_0
- mkl_fft=1.3.0=py38h54f3939_0
- mkl_random=1.1.1=py38h0573a6f_0
- multidict=5.1.0=py38h27cfd23_2
- ncurses=6.2=he6710b0_1
- nettle=3.4.1=hbb512f6_0
- ninja=1.10.2=py38hff7bd54_0
- numpy=1.19.2=py38h54aff64_0
- numpy-base=1.19.2=py38hfa32c7d_0
- nvidia-ml=7.352.0=py_0
- oauthlib=3.0.1=py_0
- olefile=0.46=py_0
- openh264=2.1.0=hd408876_0
- openssl=1.1.1j=h27cfd23_0
- packaging=20.9=pyh44b312d_0
- pandas=1.2.3=py38ha9443f7_0
- pathtools=0.1.2=py_1
- pcre=8.44=he6710b0_0
- pillow=8.1.2=py38he98fc37_0
- pip=21.0.1=py38h06a4308_0
- promise=2.3=py38h578d9bd_3
- protobuf=3.14.0=py38h2531618_1
- psutil=5.7.3=py38h8df0ef7_0
- pyasn1=0.4.8=py_0
- pyasn1-modules=0.2.7=py_0
- pycparser=2.20=pyh9f0ad1d_2
- pyglet=1.5.15=py38h578d9bd_0
- pyjwt=2.0.1=pyhd8ed1ab_0
- pyopenssl=20.0.1=pyhd8ed1ab_0
- pyparsing=2.4.7=pyh9f0ad1d_0
- pyqt=5.9.2=py38h05f1152_4
- pysocks=1.7.1=py38h578d9bd_3
- python=3.8.8=hdb3f193_4
- python-dateutil=2.8.1=pyhd3eb1b0_0
- python_abi=3.8=1_cp38
- pytorch=1.8.0=py3.8_cuda10.2_cudnn7.6.5_0
- pytorch-lightning=1.2.4=pyhd8ed1ab_0
- pytz=2021.1=pyhd3eb1b0_0
- pyyaml=5.3.1=py38h8df0ef7_1
- qt=5.9.7=h5867ecd_1
- readline=8.1=h27cfd23_0
- requests=2.25.1=pyhd3deb0d_0
- requests-oauthlib=1.3.0=pyh9f0ad1d_0
- rsa=4.7.2=pyh44b312d_0
- scikit-learn=0.24.1=py38ha9443f7_0
- scipy=1.6.1=py38h91f5cce_0
- sentry-sdk=0.20.3=pyh44b312d_0
- setuptools=52.0.0=py38h06a4308_0
- shortuuid=1.0.1=py38h578d9bd_4
- sip=4.19.13=py38he6710b0_0
- six=1.15.0=py38h06a4308_0
- smmap=3.0.5=pyh44b312d_0
- sqlite=3.35.2=hdfb4753_0
- subprocess32=3.5.4=py_1
- tensorboard=2.4.1=pyhd8ed1ab_0
- tensorboard-plugin-wit=1.8.0=pyh44b312d_0
- tensorboardx=2.1=py_0
- threadpoolctl=2.1.0=pyh5ca1d4c_0
- tk=8.6.10=hbc83047_0
- torchaudio=0.8.0=py38
- torchvision=0.9.0=py38_cu102
- tornado=6.1=py38h27cfd23_0
- tqdm=4.59.0=pyhd8ed1ab_0
- typing-extensions=3.7.4.3=0
- typing_extensions=3.7.4.3=py_0
- urllib3=1.26.4=pyhd8ed1ab_0
- wandb=0.10.20=pyhd8ed1ab_0
- watchdog=0.10.4=py38h578d9bd_0
- werkzeug=1.0.1=pyh9f0ad1d_0
- wheel=0.36.2=pyhd3eb1b0_0
- xz=5.2.5=h7b6447c_0
- yaml=0.2.5=h516909a_0
- yarl=1.6.3=py38h25fe258_0
- zipp=3.4.1=pyhd8ed1ab_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.5=h9ceee32_0
- pip:
- pytorch-lightning-bolts==0.3.0
prefix: /home/eric/miniconda3/envs/VAE180
Overly-Restrictive Constraints
I'd venture the issue is that recreating an environment from a YAML that includes versions and builds establishes those versions and builds as explicit specifications for that environment going forward. Conda regards explicit specifications as hard requirements that it cannot mutate, so if even a single dependency of cudatoolkit also needs updating in order to move to version 11, Conda cannot satisfy the request without violating those previously specified constraints.
Specifically, this is what I see when searching (assuming linux-64 platform):
$ conda search --info 'cudatoolkit[subdir="linux-64",version="11.1.*"]'
Loading channels: done
...
cudatoolkit 11.1.1 h6406543_8
-----------------------------
file name : cudatoolkit-11.1.1-h6406543_8.tar.bz2
name : cudatoolkit
version : 11.1.1
build : h6406543_8
build number: 8
size : 1.20 GB
license : NVIDIA End User License Agreement
subdir : linux-64
url : https://conda.anaconda.org/conda-forge/linux-64/cudatoolkit-11.1.1-h6406543_8.tar.bz2
md5 : 4851e7f19b684e517dc8e6b5b375dda0
timestamp : 2021-02-12 16:31:01 UTC
constraints :
- __cuda >=11.1
dependencies:
- __glibc >=2.17,<3.0.a0
- libgcc-ng >=9.3.0
- libstdcxx-ng >=9.3.0
Note how the dependencies conflict with your YAML specification: the YAML locks libgcc-ng and libstdcxx-ng to version 9.1.0, whereas cudatoolkit==11.1.1 requires version 9.3.0 or greater.
Minimal Relaxing
It may be sufficient to edit only the specifications noted above; however, there is no simple way of telling whether this will conflict with something else. It's worth a try, though. That is, edit the YAML to have:
dependencies:
...
- cudatoolkit=11.1
- libgcc-ng
- libstdcxx-ng
...
with the idea of leaving the constraints off those dependencies and letting Conda solve them with whatever works.
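For concreteness, a sketch of that workflow (note that the pytorch and torchvision entries in the YAML are pinned to cuda10.2/cu102 builds, so they would likely need the same relaxing):
# on Machine A
conda env export -n VAE180 > VAE180.yaml
# hand-edit VAE180.yaml as above:
#   - cudatoolkit=11.1
#   - libgcc-ng
#   - libstdcxx-ng
# on Machine B
conda env create -n VAE180 -f VAE180.yaml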
Liberal Constraints
A more liberal way of exporting an environment is to use the --from-history flag:
conda env export --from-history -n VAE180 > VAE180.minimal.yaml
This will only export the explicit constraints from the user history, which may or may not include versions and builds, depending on how the user originally created the environment. Usually, it is far less constrained than what the default export will generate. One could then edit this YAML to have cudatoolkit=11.1 and then try to create the env with that.
The downside to this approach is that many of the other packages will likely take on newer versions, so it isn't as faithful a replication of the original environment as before.
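Concretely, the steps would look something like this (untested sketch):
conda env export --from-history -n VAE180 > VAE180.minimal.yaml
# edit VAE180.minimal.yaml so the dependencies list includes:
#   - cudatoolkit=11.1
conda env create -n VAE180 -f VAE180.minimal.yaml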
Basically, I always just create a fresh CUDA 11.1 Conda environment for a new A100 machine. Conda installs almost everything you need, and then you can add the extra packages for your projects; for me that count is never large. Even a full torch or tf environment, or the whole scipy/pandas/sklearn stack, doesn't take long.
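For example, a fresh A100 environment might look roughly like this (the package list is inferred from the YAML above, and the channels and pins are assumptions; check pytorch.org for the install command matching your versions):
conda create -n VAE180 -c pytorch -c conda-forge python=3.8 pytorch=1.8.0 torchvision=0.9.0 torchaudio=0.8.0 cudatoolkit=11.1 pytorch-lightning wandb gym
conda activate VAE180
pip install pytorch-lightning-bolts==0.3.0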
Related
I am trying to install the NVIDIA toolkit with CUDA 11.6.0.
It keeps giving me the message below; I have tried several methods to fix it, but none of them worked.
I am using this command: conda install cuda --channel nvidia/label/cuda-11.6.0
Collecting package metadata (current_repodata.json): done
Solving environment: \
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- defaults/win-64::anaconda==custom=py39_1
- defaults/win-64::anaconda-navigator==2.1.1=py39_0
- defaults/win-64::bcrypt==3.2.0=py39h196d8e1_0
- defaults/noarch::black==19.10b0=py_0
- defaults/noarch::bleach==4.0.0=pyhd3eb1b0_0
- defaults/win-64::bokeh==2.4.1=py39haa95532_0
- defaults/noarch::conda-token==0.3.0=pyhd3eb1b0_0
- defaults/noarch::dask==2021.10.0=pyhd3eb1b0_0
- defaults/noarch::dask-core==2021.10.0=pyhd3eb1b0_0
- defaults/win-64::distributed==2021.10.0=py39haa95532_0
- defaults/noarch::ipywidgets==7.6.5=pyhd3eb1b0_1
- defaults/win-64::jupyter==1.0.0=py39haa95532_7
- defaults/noarch::jupyterlab==3.2.1=pyhd3eb1b0_1
- defaults/noarch::jupyterlab_server==2.8.2=pyhd3eb1b0_0
- defaults/win-64::jupyter_server==1.4.1=py39haa95532_0
- defaults/noarch::nbclassic==0.2.6=pyhd3eb1b0_0
- defaults/win-64::notebook==6.4.12=py39haa95532_0
- defaults/noarch::numpydoc==1.1.0=pyhd3eb1b0_1
- defaults/noarch::paramiko==2.7.2=py_0
- defaults/win-64::pytest==6.2.4=py39haa95532_2
- defaults/noarch::python-lsp-black==1.0.0=pyhd3eb1b0_0
- defaults/win-64::scikit-image==0.18.3=py39hf11a4ad_0
- defaults/noarch::sphinx==4.2.0=pyhd3eb1b0_1
- defaults/win-64::spyder==5.1.5=py39haa95532_1
- pytorch/win-64::torchaudio==0.11.0=py39_cu113
- defaults/win-64::widgetsnbextension==3.5.1=py39haa95532_0
- defaults/win-64::_anaconda_depends==2021.11=py39_0
- defaults/win-64::_ipyw_jlab_nb_ext_conf==0.1.0=py39haa95532_0
I have conda installed locally on my Windows PC and also installed remotely on a Linux server. I already have conda packages installed locally on my Windows PC, and I want to install the same packages on the Linux server. I have already tried the following steps:
1. Create a requirements.txt file containing the currently installed packages and their versions, using the Anaconda Prompt on my Windows PC and the command conda list -e > requirements.txt.
2. Transfer this requirements.txt file to my Linux server.
3. Install these packages into my conda base environment using the command conda install --yes --file requirements.txt.
However, I get the following error message on my Linux server when I try to complete step 3:
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- ca-certificates==2020.12.8=haa95532_0
- m2w64-gcc-libs-core==5.3.0=7
- audioread==2.1.8=pypi_0
- pywin32-ctypes==0.2.0=py38_1000
- notebook==6.1.5=py38haa95532_0
- librosa==0.8.0=pypi_0
- numpy-base==1.19.2=py38ha3acd2a_0
- psutil==5.7.2=py38he774522_0
- numpy==1.19.2=py38hadc3359_0
- regex==2020.11.13=py38h2bbff1b_0
- spyder-kernels==1.10.0=py38haa95532_0
- appdirs==1.4.4=pypi_0
- ujson==4.0.1=py38ha925a31_0
- setuptools==51.0.0=py38haa95532_2
- sklearn-crfsuite==0.3.6=pypi_0
- pywinpty==0.5.7=py38_0
- m2w64-gmp==6.1.0=2
- pyyaml==5.3.1=py38he774522_1
- bzip2==1.0.8=he774522_0
- sounddevice==0.4.1=pypi_0
- certifi==2020.12.5=py38haa95532_0
- gpytorch==1.3.0=pypi_0
- winpty==0.4.3=4
- pyzmq==20.0.0=py38hd77b12b_1
- pytorch==1.7.1=py3.8_cpu_0
- lazy-object-proxy==1.4.3=py38h2bbff1b_2
- zeromq==4.3.3=ha925a31_3
- ipython==7.19.0=py38hd4e2768_0
- mkl_fft==1.2.0=py38h45dec08_0
- conda-package-handling==1.7.2=py38h76e460a_0
- vc==14.2=h21ff451_1
- cpuonly==1.0=0
- pip==20.3.1=py38haa95532_0
- tornado==6.1=py38h2bbff1b_0
- libarchive==3.4.2=h5e25573_0
- msys2-conda-epoch==20160418=1
- pandocfilters==1.4.3=py38haa95532_1
- scikit-learn==0.23.2=pypi_0
- torchaudio==0.7.2=py38
- soundfile==0.10.3.post1=pypi_0
- gsl==2.4=hfa6e2cd_4
- kiwisolver==1.3.0=py38hd77b12b_0
- argon2-cffi==20.1.0=py38he774522_1
- dataclasses==0.6=pypi_0
- libtiff==4.1.0=h56a325e_1
- torchvision==0.8.2=py38_cpu
- m2w64-libwinpthread-git==5.0.0.4634.697f757=2
- numba==0.51.2=pypi_0
- pooch==1.2.0=pypi_0
- cvxopt==1.2.0=py38hdc3235a_0
- tabulate==0.8.7=pypi_0
- pillow==8.0.1=py38h4fa10fc_0
- libpng==1.6.37=h2a8f88b_0
- libiconv==1.15=h1df5818_7
- rtree==0.9.4=py38h21ff451_1
- qt==5.9.7=vc14h73c81de_0
- ruamel_yaml==0.15.87=py38he774522_1
- libsodium==1.0.18=h62dcd97_0
- yaml==0.2.5=he774522_0
- m2w64-gcc-libs==5.3.0=7
- libspatialindex==1.9.3=h33f27b4_0
- jedi==0.17.2=py38haa95532_1
- tk==8.6.10=he774522_0
- six==1.15.0=py38haa95532_0
- python-crfsuite==0.9.7=pypi_0
- spyder==4.2.0=py38haa95532_0
- cffi==1.14.4=py38hcd4344a_0
- xz==5.2.5=h62dcd97_0
- console_shortcut==0.1.1=4
- sqlite==3.33.0=h2a8f88b_0
- pycosat==0.6.3=py38h2bbff1b_0
- pyrsistent==0.17.3=py38he774522_0
- markupsafe==1.1.1=py38he774522_0
- bcrypt==3.2.0=py38he774522_0
- libuv==1.40.0=he774522_0
- brotlipy==0.7.0=py38h2bbff1b_1003
- mistune==0.8.4=py38he774522_1000
- wrapt==1.11.2=py38he774522_0
- powershell_shortcut==0.0.1=3
- mkl-service==2.3.0=py38h196d8e1_0
- pysocks==1.7.1=py38haa95532_0
- typeguard==2.10.0=pypi_0
- jpeg==9b=hb83a4c4_2
- libxml2==2.9.10=hb89e7f3_3
- freetype==2.10.4=hd328e21_0
- python==3.8.5=h5fd99cc_1
- liblief==0.10.1=ha925a31_0
- sip==4.19.13=py38ha925a31_0
- scipy==1.5.4=pypi_0
- pywin32==227=py38he774522_1
- nltk==3.5=pypi_0
- py-lief==0.10.1=py38ha925a31_0
- threadpoolctl==2.1.0=pypi_0
- zlib==1.2.11=h62dcd97_4
- cudatoolkit==10.2.89=h74a9793_1
- zstd==1.4.5=h04227a9_0
- mkl_random==1.1.1=py38h47e9c7a_0
- glpk==4.65=hdc00fd2_2
- ninja==1.10.2=py38h6d14046_0
- joblib==0.17.0=pypi_0
- typed-ast==1.4.1=py38he774522_0
- pandas==1.1.3=py38ha925a31_0
- llvmlite==0.34.0=pypi_0
- resampy==0.2.2=pypi_0
- pynacl==1.4.0=py38h62dcd97_1
- vs2015_runtime==14.27.29016=h5e58377_2
- icu==58.2=ha925a31_3
- matplotlib-base==3.3.2=py38hba9282a_0
- menuinst==1.4.16=py38he774522_1
- pyqt==5.9.2=py38ha925a31_4
- cryptography==3.3.1=py38hcd4344a_0
- jupyter_core==4.7.0=py38haa95532_0
- ax-platform==0.1.19=pypi_0
- botorch==0.3.3=pypi_0
- win_inet_pton==1.1.0=py38haa95532_0
- pkginfo==1.6.1=py38haa95532_0
- openssl==1.1.1i=h2bbff1b_0
- wincertstore==0.2=py38_0
- matplotlib==3.3.2=pypi_0
- lz4-c==1.9.2=hf4a77e7_3
- pandoc==2.11=h9490d1a_0
- conda==4.9.2=py38haa95532_0
- ad3==2.3.dev0=pypi_0
- retrying==1.3.3=pypi_0
- plotly==4.14.1=pypi_0
- m2w64-gcc-libgfortran==5.3.0=6
- watchdog==0.10.4=py38haa95532_0
- chardet==3.0.4=py38haa95532_1003
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/free/linux-64
- https://repo.anaconda.com/pkgs/free/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/pro/linux-64
- https://repo.anaconda.com/pkgs/pro/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
I am aware that the problem is that I am not looking in the correct conda channels, as the error message suggests, but I am not sure how to solve this problem.
Thanks for the help.
The official documentation has some additional steps for getting an environment working cross-platform. Here is the link to that.
However, if you are not using packages that are only available in Anaconda channels, you can do the following.
Have pip in both of your conda environments (Windows and the Linux server).
Make the requirements.txt file using pip freeze instead of conda's, from the Windows machine:
$ pip freeze > requirements.txt
Install the packages on the Linux server normally with pip:
$ pip install -r requirements.txt
I am not saying that it is the best option, but this way is usually easier and also works with other environment-management tools like pyenv.
First of all, I'm a total newbie, so please bear with my idiocy :)
I run this:
conda env create -f env.yml
Here's the yml file:
name: DAND
channels: !!python/tuple
- defaults
dependencies:
- _nb_ext_conf=0.3.0=py27_0
- anaconda-client=1.6.0=py27_0
- appnope=0.1.0=py27_0
- backports=1.0=py27_0
- backports_abc=0.5=py27_0
- beautifulsoup4=4.5.1=py27_0
- clyent=1.2.2=py27_0
- configparser=3.5.0=py27_0
- cycler=0.10.0=py27_0
- decorator=4.0.10=py27_1
- entrypoints=0.2.2=py27_0
- enum34=1.1.6=py27_0
- freetype=2.5.5=1
- functools32=3.2.3.2=py27_0
- get_terminal_size=1.0.0=py27_0
- icu=54.1=0
- ipykernel=4.5.2=py27_0
- ipython=5.1.0=py27_1
- ipython_genutils=0.1.0=py27_0
- ipywidgets=5.2.2=py27_0
- jinja2=2.8=py27_1
- jsonschema=2.5.1=py27_0
- jupyter=1.0.0=py27_3
- jupyter_client=4.4.0=py27_0
- jupyter_console=5.0.0=py27_0
- jupyter_core=4.2.1=py27_0
- libpng=1.6.22=0
- markupsafe=0.23=py27_2
- matplotlib=1.5.3=np111py27_1
- mistune=0.7.3=py27_1
- mkl=11.3.3=0
- nb_anacondacloud=1.2.0=py27_0
- nb_conda=2.0.0=py27_0
- nb_conda_kernels=2.0.0=py27_0
- nbconvert=4.2.0=py27_0
- nbformat=4.2.0=py27_0
- nbpresent=3.0.2=py27_0
- nltk=3.2.1=py27_0
- notebook=4.3.0=py27_0
- numpy=1.11.2=py27_0
- openssl=1.0.2j=0
- pandas=0.19.1=np111py27_0
- path.py=8.2.1=py27_0
- pathlib2=2.1.0=py27_0
- pexpect=4.0.1=py27_0
- pickleshare=0.7.4=py27_0
- pip=9.0.1=py27_1
- prompt_toolkit=1.0.9=py27_0
- ptyprocess=0.5.1=py27_0
- pygments=2.1.3=py27_0
- pymongo=3.3.0=py27_0
- pyparsing=2.1.4=py27_0
- pyqt=5.6.0=py27_1
- python=2.7.12=1
- python-dateutil=2.6.0=py27_0
- python.app=1.2=py27_4
- pytz=2016.10=py27_0
- pyyaml=3.12=py27_0
- pyzmq=16.0.2=py27_0
- qt=5.6.2=0
- qtconsole=4.2.1=py27_1
- readline=6.2=2
- requests=2.12.3=py27_0
- scikit-learn=0.17.1=np111py27_2
- scipy=0.18.1=np111py27_0
- seaborn=0.7.1=py27_0
- setuptools=27.2.0=py27_0
- simplegeneric=0.8.1=py27_1
- singledispatch=3.4.0.3=py27_0
- sip=4.18=py27_0
- six=1.10.0=py27_0
- sqlite=3.13.0=0
- ssl_match_hostname=3.4.0.2=py27_1
- terminado=0.6=py27_0
- tk=8.5.18=0
- tornado=4.4.2=py27_0
- traitlets=4.3.1=py27_0
- unicodecsv=0.14.1=py27_0
- wcwidth=0.1.7=py27_0
- wheel=0.29.0=py27_0
- widgetsnbextension=1.2.6=py27_0
- xlrd=1.0.0=py27_0
- yaml=0.1.6=0
- zlib=1.2.8=3
- pip:
- backports-abc==0.5
- backports.shutil-get-terminal-size==1.0.0
- backports.ssl-match-hostname==3.4.0.2
- ipython-genutils==0.1.0
- jupyter-client==4.4.0
- jupyter-console==5.0.0
- jupyter-core==4.2.1
- nb-anacondacloud==1.2.0
- nb-conda==2.0.0
- nb-conda-kernels==2.0.0
- prompt-toolkit==1.0.9
prefix: /Users/mat/anaconda/envs/DAND
The error I run into:
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- jupyter_console==5.0.0=py27_0
- freetype==2.5.5=1
- pyzmq==16.0.2=py27_0
- configparser==3.5.0=py27_0
- scipy==0.18.1=np111py27_0
- libpng==1.6.22=0
- ...then the list goes on, listing all of the dependencies in the yml file except the ones under pip
Things I've attempted:
I got this yaml file from a Udacity online class I'm taking (I downloaded it from the website), so I don't think the conda env export --no-builds > env.yml method applies to me.
I tried the solution in here: I simply moved everything under the pip block and ran into a new error. Maybe I'm misunderstanding the solution.
The new error I run into:
Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.
Collecting package metadata (repodata.json): done
Solving environment: done
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Ran pip subprocess with arguments:
['/Users/yulia/anaconda3/envs/DAND/bin/python', '-m', 'pip', 'install', '-U', '-r', '/Users/yulia/data analysis -uda/condaenv.mo_ctuap.requirements.txt']
Pip subprocess output:
Pip subprocess error:
ERROR: Double requirement given: backports_abc==0.5=py27_0 (from -r /Users/yulia/data analysis -uda/condaenv.mo_ctuap.requirements.txt (line 12)) (already in backports-abc==0.5 (from -r /Users/yulia/data analysis -uda/condaenv.mo_ctuap.requirements.txt (line 1)), name='backports-abc')
CondaEnvException: Pip failed
I read some other posts suggesting to use pip to install the requirements.txt file, and some posts about the "CondaEnvException: Pip failed" situation, but they didn't give explicit solutions, and most of the time I'm really confused by them.
Please let me know what I'm missing here, this is getting frustrating as I cannot set up the proper environment to continue the class. Thank you so much in advance!
UPDATE
It seems that things might work better in the end if you skip the env file altogether. Instead, create an env with the required dependencies manually; that way the libraries are up to date and the notebooks appear to work properly.
$ conda create -n DAND python=2 numpy pandas matplotlib seaborn
Look for required libraries in your course's "Setting up your system" (or similar) section. The ones in my example are based on Udacity's "Intro to Data Analysis" course.
Older answer
I had a similar problem and what eventually worked for me was adding two more channels in the channels section of this YAML file.
Before:
channels: !!python/tuple
- defaults
After:
channels: !!python/tuple
- defaults
- conda-forge
- anaconda
Then all the packages, even with the version restrictions, were found.
In case you get errors about conflicting versions, make sure to set conda's channel_priority config to false:
$ conda config --set channel_priority false
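Putting the two changes together (the env name DAND comes from the YAML above):
conda config --set channel_priority false
conda env create -f env.yml
conda activate DAND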
I would like to know how to install Python libraries from a yml file without making a new environment. I already have a tensorflow environment in conda, and I want to install the following list of libraries into it. The only way I know is to manually add each of these libraries, but that is very hard to do for a list this long. Please give me a solution for that.
This is yml file:
name: virtual_platform
channels:
- menpo
- conda-forge
- peterjc123
- defaults
dependencies:
- ffmpeg=3.2.4=1
- freetype=2.7=vc14_1
- imageio=2.2.0=py35_0
- libtiff=4.0.6=vc14_7
- olefile=0.44=py35_0
- pillow=4.2.1=py35_0
- vc=14=0
- alabaster=0.7.10=py35_0
- astroid=1.5.3=py35_0
- babel=2.5.0=py35_0
- bleach=1.5.0=py35_0
- certifi=2016.2.28=py35_0
- cffi=1.10.0=py35_0
- chardet=3.0.4=py35_0
- colorama=0.3.9=py35_0
- decorator=4.1.2=py35_0
- docutils=0.14=py35_0
- entrypoints=0.2.3=py35_0
- html5lib=0.9999999=py35_0
- icu=57.1=vc14_0
- imagesize=0.7.1=py35_0
- ipykernel=4.6.1=py35_0
- ipython=6.1.0=py35_0
- ipython_genutils=0.2.0=py35_0
- isort=4.2.15=py35_0
- jedi=0.10.2=py35_2
- jinja2=2.9.6=py35_0
- jpeg=9b=vc14_0
- jsonschema=2.6.0=py35_0
- jupyter_client=5.1.0=py35_0
- jupyter_core=4.3.0=py35_0
- lazy-object-proxy=1.3.1=py35_0
- libpng=1.6.30=vc14_1
- markupsafe=1.0=py35_0
- mistune=0.7.4=py35_0
- mkl=2017.0.3=0
- nbconvert=5.2.1=py35_0
- nbformat=4.4.0=py35_0
- numpy=1.13.1=py35_0
- numpydoc=0.7.0=py35_0
- openssl=1.0.2l=vc14_0
- pandocfilters=1.4.2=py35_0
- path.py=10.3.1=py35_0
- pickleshare=0.7.4=py35_0
- pip=9.0.1=py35_1
- prompt_toolkit=1.0.15=py35_0
- psutil=5.2.2=py35_0
- pycodestyle=2.3.1=py35_0
- pycparser=2.18=py35_0
- pyflakes=1.6.0=py35_0
- pygments=2.2.0=py35_0
- pylint=1.7.2=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- pyzmq=16.0.2=py35_0
- qt=5.6.2=vc14_6
- qtawesome=0.4.4=py35_0
- qtconsole=4.3.1=py35_0
- qtpy=1.3.1=py35_0
- requests=2.14.2=py35_0
- rope=0.9.4=py35_1
- setuptools=36.4.0=py35_1
- simplegeneric=0.8.1=py35_1
- singledispatch=3.4.0.3=py35_0
- sip=4.18=py35_0
- six=1.10.0=py35_1
- snowballstemmer=1.2.1=py35_0
- sphinx=1.6.3=py35_0
- sphinxcontrib=1.0=py35_0
- sphinxcontrib-websupport=1.0.1=py35_0
- spyder=3.2.3=py35_0
- testpath=0.3.1=py35_0
- tornado=4.5.2=py35_0
- traitlets=4.3.2=py35_0
- vs2015_runtime=14.0.25420=0
- wcwidth=0.1.7=py35_0
- wheel=0.29.0=py35_0
- win_unicode_console=0.5=py35_0
- wincertstore=0.2=py35_0
- wrapt=1.10.11=py35_0
- zlib=1.2.11=vc14_0
- opencv3=3.1.0=py35_0
- pytorch=0.1.12=py35_0.1.12cu80
- torch==0.1.12
- torchvision==0.1.9
- pip:
- ipython-genutils==0.2.0
- jupyter-client==5.1.0
- jupyter-core==4.3.0
- prompt-toolkit==1.0.15
- pyyaml==3.12
- rope-py3k==0.9.4.post1
- torch==0.1.12
- torchvision==0.1.9
- win-unicode-console==0.5
You can use the conda env update command:
conda env update --name <your env name> -f <your file>.yml
or, if the environment you want to update is already activated, then
conda env update -f <your file>.yml
If you want to create the environment from your yml file:
conda env create -f environment.yml
The name of your environment is virtual_platform. If you want another name, just edit the name field in the yml to the desired name.
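For the question as asked, installing this list into the existing tensorflow environment, that would be (assuming the file above is saved as virtual_platform.yml):
conda env update --name tensorflow -f virtual_platform.yml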
It is not recommended to install packages into your base environment, but if that is what you want (and I believe you should not), you need to create a requirements.txt from the dependencies listed in your yml.
Copy and paste all the dependency packages and their versions into requirements.txt, like so:
python ==3.5
ffmpeg=3.2.4
freetype=2.7
imageio=2.2.0
...
Then do:
conda install --yes --file requirements.txt
The problem is that this will fail if any dependency fails to install, so I recommend installing from the yml, which means having an environment separate from the rest.
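If you do want the requirements.txt route without the hand-copying, here is a rough sketch (it assumes the name=version=build layout shown above and skips the pip: section):
# keep only "- name=version=build" lines, drop the list dash and the build string
grep -E '^[[:space:]]*- [^=]+=[^=]+=[^=]+$' virtual_platform.yml | sed -E 's/^[[:space:]]*- //; s/=[^=]+$//' > requirements.txt
conda install --yes --file requirements.txt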
I just need to import an Anaconda .yml environment file into a virtualenv virtual environment.
The reason I need to do this is that on the nVidia Jetson TX2 developer board I cannot install and run the Anaconda distribution (it is not compatible with the ARM architecture). Virtualenv and Jupyter, instead, install and run flawlessly.
The .yml file is listed like this:
name: tfdeeplearning
channels:
- defaults
dependencies:
- bleach=1.5.0=py35_0
- certifi=2016.2.28=py35_0
- colorama=0.3.9=py35_0
- cycler=0.10.0=py35_0
- decorator=4.1.2=py35_0
- entrypoints=0.2.3=py35_0
- html5lib=0.9999999=py35_0
- icu=57.1=vc14_0
- ipykernel=4.6.1=py35_0
- ipython=6.1.0=py35_0
- ipython_genutils=0.2.0=py35_0
- ipywidgets=6.0.0=py35_0
- jedi=0.10.2=py35_2
- jinja2=2.9.6=py35_0
- jpeg=9b=vc14_0
- jsonschema=2.6.0=py35_0
- jupyter=1.0.0=py35_3
- jupyter_client=5.1.0=py35_0
- jupyter_console=5.2.0=py35_0
- jupyter_core=4.3.0=py35_0
- libpng=1.6.30=vc14_1
- markupsafe=1.0=py35_0
- matplotlib=2.0.2=np113py35_0
- mistune=0.7.4=py35_0
- mkl=2017.0.3=0
- nbconvert=5.2.1=py35_0
- nbformat=4.4.0=py35_0
- notebook=5.0.0=py35_0
- numpy=1.13.1=py35_0
- openssl=1.0.2l=vc14_0
- pandas=0.20.3=py35_0
- pandocfilters=1.4.2=py35_0
- path.py=10.3.1=py35_0
- pickleshare=0.7.4=py35_0
- pip=9.0.1=py35_1
- prompt_toolkit=1.0.15=py35_0
- pygments=2.2.0=py35_0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- pyzmq=16.0.2=py35_0
- qt=5.6.2=vc14_6
- qtconsole=4.3.1=py35_0
- requests=2.14.2=py35_0
- scikit-learn=0.19.0=np113py35_0
- scipy=0.19.1=np113py35_0
- setuptools=36.4.0=py35_1
- simplegeneric=0.8.1=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_1
- testpath=0.3.1=py35_0
- tk=8.5.18=vc14_0
- tornado=4.5.2=py35_0
- traitlets=4.3.2=py35_0
- vs2015_runtime=14.0.25420=0
- wcwidth=0.1.7=py35_0
- wheel=0.29.0=py35_0
- widgetsnbextension=3.0.2=py35_0
- win_unicode_console=0.5=py35_0
- wincertstore=0.2=py35_0
- zlib=1.2.11=vc14_0
- pip:
- ipython-genutils==0.2.0
- jupyter-client==5.1.0
- jupyter-console==5.2.0
- jupyter-core==4.3.0
- markdown==2.6.9
- prompt-toolkit==1.0.15
- protobuf==3.4.0
- tensorflow==1.3.0
- tensorflow-tensorboard==0.1.6
- werkzeug==0.12.2
- win-unicode-console==0.5
prefix: C:\Users\Marcial\Anaconda3\envs\tfdeeplearning
pip can install from a requirements.txt file, which would look like the items in the sequence that is the value for the key pip in your .yml file, but without the dashes:
ipython-genutils==0.2.0
jupyter-client==5.1.0
jupyter-console==5.2.0
jupyter-core==4.3.0
markdown==2.6.9
prompt-toolkit==1.0.15
protobuf==3.4.0
tensorflow==1.3.0
tensorflow-tensorboard==0.1.6
werkzeug==0.12.2
win-unicode-console==0.5
Assuming that the end of your file actually looks like:
.
.
.
- wincertstore=0.2=py35_0
- zlib=1.2.11=vc14_0
- pip:
- ipython-genutils==0.2.0
- jupyter-client==5.1.0
- jupyter-console==5.2.0
- jupyter-core==4.3.0
- markdown==2.6.9
- prompt-toolkit==1.0.15
- protobuf==3.4.0
- tensorflow==1.3.0
- tensorflow-tensorboard==0.1.6
- werkzeug==0.12.2
- win-unicode-console==0.5
prefix: C:\Users\Marcial\Anaconda3\envs\tfdeeplearning
(i.e. the entry for pip is indented to make this a valid YAML file), and is named anaconda-project.yml, you can do:
import ruamel.yaml

yaml = ruamel.yaml.YAML()
data = yaml.load(open('anaconda-project.yml'))

requirements = []
for dep in data['dependencies']:
    if isinstance(dep, str):
        # conda entries look like "name=version=build"
        package, package_version, build = dep.split('=')
        if build == '0':
            # build "0" marks packages with no Python build tag
            # (e.g. mkl, python itself); skip them
            continue
        requirements.append(package + '==' + package_version)
    elif isinstance(dep, dict):
        # the nested mapping holds the entries under the "pip" key
        for preq in dep.get('pip', []):
            requirements.append(preq)

with open('requirements.txt', 'w') as fp:
    for requirement in requirements:
        print(requirement, file=fp)
resulting in a requirements.txt file, which can be used with:
pip install -r requirements.txt
Please note:
- the non-pip packages might not be available from PyPI
- the current pip version is 18.1; the one in that requirements list is old
- according to the official YAML FAQ, using .yml as an extension for your YAML file should only be done if your filesystem does not support the recommended .yaml extension. On modern filesystems that is never the case. I don't know if Anaconda is, as so often, non-conformant, or whether you have a choice in the matter.
- since the introduction of binary wheels a few years ago, with many packages supporting them, it is often (and for me always) possible to just use virtualenvs and pip, thereby circumventing the problems caused by Anaconda not being 100% compliant and not being up-to-date with all its packages (compared to PyPI).
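For the Jetson use case, the virtualenv side would then be roughly (a sketch; some pinned versions, e.g. tensorflow==1.3.0, may have no ARM wheel on PyPI and will need pruning or replacing):
virtualenv tfdeeplearning
source tfdeeplearning/bin/activate
pip install -r requirements.txt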