While installing tensorflow-datasets in Anaconda, using the CMD.exe prompt of Anaconda Navigator, I am getting the message that packages will be SUPERSEDED by a higher-priority channel:
conda install -c anaconda tensorflow-datasets
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: C:\Users\PRASHIK\anaconda3\envs\python_3_6
added / updated specs:
- tensorflow-datasets
The following packages will be downloaded:
package | build
---------------------------|-----------------
ca-certificates-2020.10.14 | 0 159 KB anaconda
certifi-2020.6.20 | py36_0 160 KB anaconda
dill-0.3.2 | py_0 65 KB anaconda
future-0.18.2 | py36_1 744 KB anaconda
googleapis-common-protos-1.52.0| py36h21ff451_0 75 KB anaconda
promise-2.3 | py36_0 37 KB anaconda
tensorflow-datasets-1.2.0 | py36_0 2.3 MB anaconda
tensorflow-metadata-0.14.0 | pyhe6710b0_1 165 KB anaconda
tqdm-4.50.2 | py_0 55 KB anaconda
------------------------------------------------------------
Total: 3.7 MB
The following NEW packages will be INSTALLED:
dill anaconda/noarch::dill-0.3.2-py_0
future anaconda/win-64::future-0.18.2-py36_1
googleapis-common~ anaconda/win-64::googleapis-common-protos-1.52.0-py36h21ff451_0
promise anaconda/win-64::promise-2.3-py36_0
tensorflow-datase~ anaconda/win-64::tensorflow-datasets-1.2.0-py36_0
tensorflow-metada~ anaconda/noarch::tensorflow-metadata-0.14.0-pyhe6710b0_1
tqdm anaconda/noarch::tqdm-4.50.2-py_0
The following packages will be SUPERSEDED by a higher-priority channel:
ca-certificates pkgs/main::ca-certificates-2021.4.13-~ --> anaconda::ca-certificates-2020.10.14-0
certifi pkgs/main::certifi-2020.12.5-py36haa9~ --> anaconda::certifi-2020.6.20-py36_0
Proceed ([y]/n)?
Is this ok? Does it cause any issues in the future? If yes, can someone suggest remedies, please?
It doesn't cause any issues. By default, conda prefers packages from a higher-priority channel over any version from a lower-priority channel. Therefore, you can safely put channels at the bottom of your channel list to provide additional packages that are not in the default channels, and still be confident that these channels will not override the core package set.
Conda collects all of the packages with the same name across all listed channels and processes them as follows:
Sorts packages from highest to lowest channel priority.
Sorts tied packages (those with the same channel priority) from highest to lowest version number.
Sorts still-tied packages (those with the same channel priority and the same version) from highest to lowest build number.
Installs the first package on the sorted list that satisfies the installation specifications.
You can check the channel's package index for a list of all the versions that are available for ca-certificates and certifi.
For more information on managing channels, you can refer to the conda documentation.
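As a concrete sketch of that setup (the conda-forge channel below is just an example of an extra channel you might keep at the bottom of the list):

conda config --show channels
conda config --append channels conda-forge
conda config --set channel_priority strict

With channel_priority set to strict, a lower-priority channel is only consulted for package names that do not exist in any higher-priority channel, so appended channels cannot supersede the core package set.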
When I install tensorflow-gpu through conda, it gives me the following output:
conda install tensorflow-gpu
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/psychotechnopath/anaconda3/envs/DeepLearning3.6
added / updated specs:
- tensorflow-gpu
The following packages will be downloaded:
package | build
---------------------------|-----------------
_tflow_select-2.1.0 | gpu 2 KB
cudatoolkit-10.1.243 | h6bb024c_0 347.4 MB
cudnn-7.6.5 | cuda10.1_0 179.9 MB
cupti-10.1.168 | 0 1.4 MB
tensorflow-2.1.0 |gpu_py36h2e5cdaa_0 4 KB
tensorflow-base-2.1.0 |gpu_py36h6c5654b_0 155.9 MB
tensorflow-gpu-2.1.0 | h0d30ee6_0 3 KB
------------------------------------------------------------
Total: 684.7 MB
The following NEW packages will be INSTALLED:
cudatoolkit pkgs/main/linux-64::cudatoolkit-10.1.243-h6bb024c_0
cudnn pkgs/main/linux-64::cudnn-7.6.5-cuda10.1_0
cupti pkgs/main/linux-64::cupti-10.1.168-0
tensorflow-gpu pkgs/main/linux-64::tensorflow-gpu-2.1.0-h0d30ee6_0
I see that installing tensorflow-gpu automatically triggers the installation of cudatoolkit and cudnn. Does this mean that I no longer need to install CUDA and cuDNN manually to be able to use tensorflow-gpu? Where does this conda installation of CUDA reside?
I first installed CUDA and cuDNN the old way (e.g. by following these installation instructions: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html), and then I noticed that tensorflow-gpu was also installing CUDA and cuDNN.
Do I now have two versions of CUDA/cuDNN installed, and how do I check this?
No.
conda installs the bare minimum of redistributable library components required to support the CUDA-accelerated packages it offers. The package name cudatoolkit is a complete misnomer; it is nothing of the sort. Even though it has greatly expanded in scope from what it used to be (literally 5 files -- I think at some point they must have gotten a licensing deal from NVIDIA, because some of this wasn't/isn't on the official "freely redistributable" list AFAIK), it is still basically just a handful of libraries.
You can check this for yourself:
cat /opt/miniconda3/conda-meta/cudatoolkit-10.1.168-0.json
{
"build": "0",
"build_number": 0,
"channel": "https://repo.anaconda.com/pkgs/main/linux-64",
"constrains": [],
"depends": [],
"extracted_package_dir": "/opt/miniconda3/pkgs/cudatoolkit-10.1.168-0",
"features": "",
"files": [
"lib/cudatoolkit_config.yaml",
"lib/libcublas.so",
"lib/libcublas.so.10",
"lib/libcublas.so.10.2.0.168",
"lib/libcublasLt.so",
"lib/libcublasLt.so.10",
"lib/libcublasLt.so.10.2.0.168",
"lib/libcudart.so",
"lib/libcudart.so.10.1",
"lib/libcudart.so.10.1.168",
"lib/libcufft.so",
"lib/libcufft.so.10",
"lib/libcufft.so.10.1.168",
"lib/libcufftw.so",
"lib/libcufftw.so.10",
"lib/libcufftw.so.10.1.168",
"lib/libcurand.so",
"lib/libcurand.so.10",
"lib/libcurand.so.10.1.168",
"lib/libcusolver.so",
"lib/libcusolver.so.10",
"lib/libcusolver.so.10.1.168",
"lib/libcusparse.so",
"lib/libcusparse.so.10",
"lib/libcusparse.so.10.1.168",
"lib/libdevice.10.bc",
"lib/libnppc.so",
"lib/libnppc.so.10",
"lib/libnppc.so.10.1.168",
"lib/libnppial.so",
"lib/libnppial.so.10",
"lib/libnppial.so.10.1.168",
"lib/libnppicc.so",
"lib/libnppicc.so.10",
"lib/libnppicc.so.10.1.168",
"lib/libnppicom.so",
"lib/libnppicom.so.10",
"lib/libnppicom.so.10.1.168",
"lib/libnppidei.so",
"lib/libnppidei.so.10",
"lib/libnppidei.so.10.1.168",
"lib/libnppif.so",
"lib/libnppif.so.10",
"lib/libnppif.so.10.1.168",
"lib/libnppig.so",
"lib/libnppig.so.10",
"lib/libnppig.so.10.1.168",
"lib/libnppim.so",
"lib/libnppim.so.10",
"lib/libnppim.so.10.1.168",
"lib/libnppist.so",
"lib/libnppist.so.10",
"lib/libnppist.so.10.1.168",
"lib/libnppisu.so",
"lib/libnppisu.so.10",
"lib/libnppisu.so.10.1.168",
"lib/libnppitc.so",
"lib/libnppitc.so.10",
"lib/libnppitc.so.10.1.168",
"lib/libnpps.so",
"lib/libnpps.so.10",
"lib/libnpps.so.10.1.168",
"lib/libnvToolsExt.so",
"lib/libnvToolsExt.so.1",
"lib/libnvToolsExt.so.1.0.0",
"lib/libnvblas.so",
"lib/libnvblas.so.10",
"lib/libnvblas.so.10.2.0.168",
"lib/libnvgraph.so",
"lib/libnvgraph.so.10",
"lib/libnvgraph.so.10.1.168",
"lib/libnvjpeg.so",
"lib/libnvjpeg.so.10",
"lib/libnvjpeg.so.10.1.168",
"lib/libnvrtc-builtins.so",
"lib/libnvrtc-builtins.so.10.1",
"lib/libnvrtc-builtins.so.10.1.168",
"lib/libnvrtc.so",
"lib/libnvrtc.so.10.1",
"lib/libnvrtc.so.10.1.168",
"lib/libnvvm.so",
"lib/libnvvm.so.3",
"lib/libnvvm.so.3.3.0"
]
.....
i.e. what you get (keeping in mind that most of those "files" above are just symlinks) is:
cuBLAS runtime
the CUDA runtime library
cuFFT runtime
cuRAND runtime
cuSPARSE runtime
cuSOLVER runtime
NPP runtime
NVBLAS runtime
NVTX runtime
nvGRAPH runtime
nvJPEG runtime
NVRTC/NVVM runtime
The cudnn package that conda installs is the redistributable binary distribution, which is identical to what NVIDIA distributes: exactly two files, a header file and a library.
You would still require a supported NVIDIA driver installation to make the tensorflow that conda installs work.
If you want to actually compile and build CUDA code, you need to install a separate CUDA toolkit, which contains all the development components that conda deliberately omits from its distribution.
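To answer the "where does it reside" part concretely: the libraries live inside the environment itself. A quick check, assuming an activated environment on Linux (paths differ on Windows), is:

ls $CONDA_PREFIX/lib/libcudart*   # conda's private copy of the CUDA runtime
nvidia-smi                        # the system-wide driver, which conda cannot install for you

If nvidia-smi fails, no conda package will make tensorflow-gpu work, because the driver has to come from the system.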
Windows 10
conda 4.9.2 (via miniconda)
I installed a single package that did not require any other dependencies to be installed anew or upgraded. Once I realised that I had installed an unsuitable version of the package, I went to remove it, and this is the screen I was presented with:
(pydata) PS C:\Users\Navneeth> conda remove xlrd
Collecting package metadata (repodata.json): done
Solving environment: done
Warning: 2 possible package resolutions (only showing differing packages):
- defaults/win-64::libtiff-4.1.0-h56a325e_1, defaults/win-64::zstd-1.4.9-h19a0ad4_0
- defaults/win-64::libtiff-4.2.0-hd0e1b90_0, defaults/win-64::zstd-1.4.5-h04227a9_0
## Package Plan ##
environment location: C:\Users\Navneeth\Miniconda3\envs\pydata
removed specs:
- xlrd
The following packages will be downloaded:
package | build
---------------------------|-----------------
decorator-5.0.3 | pyhd3eb1b0_0 12 KB
importlib-metadata-3.7.3 | py38haa95532_1 31 KB
importlib_metadata-3.7.3 | hd3eb1b0_1 11 KB
ipython-7.22.0 | py38hd4e2768_0 998 KB
jupyter_client-6.1.12 | pyhd3eb1b0_0 88 KB
libtiff-4.1.0 | h56a325e_1 739 KB
nbformat-5.1.3 | pyhd3eb1b0_0 44 KB
notebook-6.3.0 | py38haa95532_0 4.4 MB
pandoc-2.12 | haa95532_0 13.2 MB
parso-0.8.2 | pyhd3eb1b0_0 69 KB
pillow-8.2.0 | py38h4fa10fc_0 671 KB
prometheus_client-0.10.0 | pyhd3eb1b0_0 46 KB
prompt-toolkit-3.0.17 | pyh06a4308_0 256 KB
terminado-0.9.4 | py38haa95532_0 26 KB
zipp-3.4.1 | pyhd3eb1b0_0 15 KB
zstd-1.4.9 | h19a0ad4_0 478 KB
------------------------------------------------------------
Total: 21.0 MB
The following packages will be REMOVED:
xlrd-2.0.1-pyhd3eb1b0_0
The following packages will be UPDATED:
decorator 4.4.2-pyhd3eb1b0_0 --> 5.0.3-pyhd3eb1b0_0
importlib-metadata pkgs/main/noarch::importlib-metadata-~ --> pkgs/main/win-64::importlib-metadata-3.7.3-py38haa95532_1
importlib_metadata 2.0.0-1 --> 3.7.3-hd3eb1b0_1
ipython 7.21.0-py38hd4e2768_0 --> 7.22.0-py38hd4e2768_0
jupyter_client 6.1.7-py_0 --> 6.1.12-pyhd3eb1b0_0
nbformat 5.1.2-pyhd3eb1b0_1 --> 5.1.3-pyhd3eb1b0_0
notebook 6.2.0-py38haa95532_0 --> 6.3.0-py38haa95532_0
pandoc 2.11-h9490d1a_0 --> 2.12-haa95532_0
parso 0.8.1-pyhd3eb1b0_0 --> 0.8.2-pyhd3eb1b0_0
pillow 8.1.2-py38h4fa10fc_0 --> 8.2.0-py38h4fa10fc_0
prometheus_client 0.9.0-pyhd3eb1b0_0 --> 0.10.0-pyhd3eb1b0_0
prompt-toolkit 3.0.8-py_0 --> 3.0.17-pyh06a4308_0
sqlite 3.33.0-h2a8f88b_0 --> 3.35.3-h2bbff1b_0
terminado 0.9.2-py38haa95532_0 --> 0.9.4-py38haa95532_0
zipp 3.4.0-pyhd3eb1b0_0 --> 3.4.1-pyhd3eb1b0_0
zstd 1.4.5-h04227a9_0 --> 1.4.9-h19a0ad4_0
The following packages will be DOWNGRADED:
libtiff 4.2.0-he0120a3_0 --> 4.1.0-h56a325e_1
Proceed ([y]/n)?
Why does conda want to update or downgrade all these other packages, when the opposite wasn't done when I installed xlrd? Is there a way that I can safely remove just xlrd? (I hear using --force is risky.)
Asymmetry
Conda re-solves when removing. When installing, Conda first attempts a frozen solve, which amounts to keeping all installed packages fixed and just searching for a version of the requested package(s) that is compatible. In this specific case, xlrd (v2.0.1) is a noarch package with only a python>=3.6 constraint, so it installs in this frozen solve pass.
The constraint xlrd will also be added to the explicit specifications.[1]
When removing, Conda will first remove the constraint and then re-solve the environment with the new set of explicit specifications. It is in this solve that Conda identifies that newer versions of packages are available and then proposes updating them.
So, the asymmetry is that the frozen solve explicitly avoids checking for any new packages, but the removal will trigger such a check. There is not currently a way to avoid this without bypassing dependency checking.
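If you accept the risk of an inconsistent environment, the bypass looks like this; it removes only the named package, with no re-solve and no updates to anything else:

conda remove --force xlrd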
Mamba
Actually, mamba, a compiled (fast!) drop-in replacement for conda, will remove only the specified package if it doesn't have anything depending on it. That is its default behavior in my testing.
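A minimal sketch of that workflow, assuming you are comfortable installing mamba into the base environment:

conda install -n base -c conda-forge mamba
mamba remove xlrd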
Addendum: Still Some Unexplained Behavior
I replicated your experience by first creating an environment with two specs:
name: foo
channels:
- conda-forge
dependencies:
- python=3.8.0
- pip=20
To simulate this being an old environment, I went into the envs/foo/conda-meta/history file and changed[2] the line
# update specs: ['pip=20', 'python=3.8.0']
to
# update specs: ['python=3.8']
Subsequently, running conda install xlrd behaves as expected. Then conda remove xlrd gives a somewhat odd result:
## Package Plan ##
environment location: /opt/conda/envs/foo
removed specs:
- xlrd
The following packages will be downloaded:
package | build
---------------------------|-----------------
pip-21.1.1 | pyhd8ed1ab_0 1.1 MB conda-forge
------------------------------------------------------------
Total: 1.1 MB
The following packages will be REMOVED:
xlrd-2.0.1-pyhd8ed1ab_3
The following packages will be UPDATED:
pip 20.3.4-pyhd8ed1ab_0 --> 21.1.1-pyhd8ed1ab_0
Proceed ([y]/n)?
This effectively replicates the OP's result; however, the additional oddity here is that the python package is not suggested for an update, even though I had intentionally loosened its constraint from 3.8.0 to 3.8. It appears that only packages not in the explicit specifications are subject to updating during package removal.
[1] The explicit specifications are the internally maintained records that Conda keeps of every constraint a user has explicitly specified. One can view the current explicit specifications of an environment with conda env export --from-history. The raw internal records can be found at yourenv/conda-meta/history.
[2] Not a recommended practice!
Installing packages to start running some code is perhaps the hardest part of my job.
Anyway, I tried installing opencv for use in an Anaconda Python 3.6 environment, and I get the error:
conda install -c conda-forge opencv
Fetching package metadata ...........
Solving package specifications: ..........
Package plan for installation in environment C:\Program Files\Anaconda3\envs\py36:
The following packages will be downloaded:
package | build
---------------------------|-----------------
libwebp-0.5.2 | vc14_7 1.1 MB conda-forge
opencv-3.2.0 | np112py36_204 92.0 MB conda-forge
------------------------------------------------------------
Total: 93.1 MB
The following NEW packages will be INSTALLED:
libwebp: 0.5.2-vc14_7 conda-forge [vc14]
opencv: 3.2.0-np112py36_204 conda-forge
Proceed ([y]/n)? y
Fetching packages ...
libwebp-0.5.2- 100% |###############################| Time: 0:00:05 213.41 kB/s
opencv-3.2.0-n 100% |###############################| Time: 0:00:48 1.97 MB/s
Extracting packages ...
[ COMPLETE ]|##################################################| 100%
Linking packages ...
PaddingError: Placeholder of length '34' too short in package conda-forge::opencv-3.2.0-np112py36_204.
The package must be rebuilt with conda-build > 2.0.
I am on a Windows System. I do not understand the error and searching isn't helping.
Any comments or suggestions to resolve the error are welcome.
For the record, OpenCV installs fine with pip.
Tested on Windows 10 with Miniconda and Python 3.6:
> pip search opencv
...
opencv-python
...
> pip install opencv-python
Tells me Requirement already satisfied.
To make sure it was correctly installed, run:
> python
>>> import cv2
>>>
Go to the root conda environment and run conda update conda. Then just import cv2 and use it.
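To double-check which build is actually active in the environment, a quick one-liner (run with the environment activated) is:

python -c "import cv2; print(cv2.__version__)"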
I am trying to calculate the Levenshtein distance between 2 strings. I tried to install 2 packages: python-levenshtein and pylev.
I used Anaconda (on a Win64 machine) for the install:
conda install -c https://conda.anaconda.org/trent pylevenshtein
It looks like the package got installed:
Fetching package metadata: ......
Solving package specifications: ..............
Package plan for installation in environment C:\Anaconda2:
The following packages will be downloaded:
package | build
---------------------------|-----------------
pylevenshtein-0.10.1 | py27_0 34 KB
setuptools-20.1.1 | py27_0 674 KB
------------------------------------------------------------
Total: 707 KB
The following NEW packages will be INSTALLED:
pylevenshtein: 0.10.1-py27_0
The following packages will be UPDATED:
setuptools: 19.6.2-py27_0 --> 20.1.1-py27_0
Proceed ([y]/n)? y
Fetching packages ...
pylevenshtein- 100% |###############################| Time: 0:00:00 42.36 kB/s
setuptools-20. 100% |###############################| Time: 0:00:02 320.43 kB/s
Extracting packages ...
[ COMPLETE ]|##################################################| 100%
Unlinking packages ...
[ COMPLETE ]|##################################################| 100%
Linking packages ...
[ COMPLETE ]|##################################################| 100%
However, when I try to import the package, it says no module named pylev. The same thing happens with python-levenshtein. The commands used are (I tried variants of these but they don't seem to work):
import pylev
import Levenshtein
I am unable to figure out what the problem is.
Some modules (this one is a C extension) must be compiled for the architecture you are using; see the explanation for your case here.
But you can always use precompiled versions, if they are available (as is the case for pylevenshtein), from Christoph Gohlke's website.
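For example, after downloading a wheel from that site that matches your Python version and architecture, the install is a single pip command (the filename below is illustrative, not an exact one from the site):

pip install python_Levenshtein-0.12.0-cp27-cp27m-win_amd64.whl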