If my installed CUDA version is 11.2 but my PyTorch build is for CUDA 10.2, will it cause a problem if I later install the NVIDIA CUDA Toolkit 11.3?
No, you cannot do this.
The libcudnn version needs to match the corresponding CUDA version.
Try using the CUDA 11.3 build for PyTorch.
See this answer: I can't train my NN with TensorFlow using GPU
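To double-check what you actually have, a quick sketch like this shows which CUDA and cuDNN versions your PyTorch wheel was built against and whether the installed driver can run it:

import torch

print(torch.__version__)                # PyTorch build, e.g. something like 1.10.0+cu113
print(torch.version.cuda)               # CUDA version the wheel was compiled against
print(torch.backends.cudnn.version())   # cuDNN version bundled with the wheel
print(torch.cuda.is_available())        # True only if the installed driver can run this build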
So, I had installed CUDA 11.2 for TensorFlow/Keras, which worked great, but now I want to use PyTorch for reinforcement learning, and currently PyTorch's pip downloads only offer two CUDA versions: 10.2 and 11.3.
If possible I don't want to upgrade to CUDA 11.3 or downgrade to 10.2. Without doing that, is it possible to install PyTorch for a CUDA version it is not recommended for?
For example, will the PyTorch build recommended for CUDA 10.2 work with CUDA 11.2?
https://github.com/veritas9872/PyTorch-Universal-Docker-Template
Please try out this repository. It contains a template to build any version of PyTorch on any version of CUDA, cuDNN, Python, etc.
Please star it if you find it useful!
I'm having trouble using my GPU with tensorflow.
I pip installed tensorflow-gpu 2.4.1
I also installed CUDA 11.2 and the cuDNN build for CUDA 11.2, following the procedure from: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installwindows , and checked that all paths are set and the libraries are in the correct place.
However, when I run tf.config.experimental.list_physical_devices('GPU') in my Jupyter Notebook, it doesn't find my GPU.
I also ran tf.test.is_built_with_cuda(), which returns True.
So is the problem that my GPU doesn't support the current version of CUDA or cuDNN? My GPU is an "NVIDIA GeForce 605".
The NVIDIA GeForce 605 card is based on the Fermi 2.0 architecture, and only Ampere, Turing, Volta, Pascal, Maxwell, and Kepler are supported for CUDA 11.x.
As per talonmies, the GeForce 605 card was never supported by TensorFlow.
You can refer to this for the NVIDIA GPU cards and CUDA architectures that are supported by TensorFlow.
For GPUs with unsupported CUDA architectures, or to use different versions of the NVIDIA libraries, you can refer to the Linux build-from-source guide.
Finally, you can also refer to the tested build configurations for Windows and Linux.
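If you want to confirm what TensorFlow itself can see, including each GPU's compute capability (Fermi is 2.x), a quick check along these lines should show it:

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_built_with_cuda())
for d in device_lib.list_local_devices():
    # physical_device_desc includes "compute capability: X.Y" for GPU devices
    print(d.name, d.device_type, d.physical_device_desc)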
I'm kind of new to machine/deep learning. I installed TensorFlow 2.4.1 and I have CUDA 11.2 and cuDNN, but when I try to get a list of available GPUs it returns nothing. (My GPU is a 1050 Ti 4GB.)
I tried to install tensorflow-gpu but nothing changed.
What should I do?
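For reference, the check in question is roughly the following (a minimal sketch, assuming TF 2.x):

import tensorflow as tf

print(tf.__version__)
print(tf.test.is_built_with_cuda())            # False means the installed package has no CUDA support
print(tf.config.list_physical_devices('GPU'))  # an empty list means TensorFlow cannot see the GPU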
Windows 10 Pro, single RTX 2080 Ti. I am new to TensorFlow.
I just installed tensorflow-gpu, version 2.1.0, Python 3.7.7, and CUDA compilation tools, release 10.1, V10.1.105. I have not installed cuDNN, nor have I registered as an NVIDIA developer. The whole installation is standard, nothing self-compiled.
The tensorflow.org documentation states that cuDNN is needed to use the GPU. But my tests for GPU-usage seem to pass. For example,
tf.config.experimental.list_physical_devices('GPU') returns [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')].
It may appear that I should just install cuDNN and not lose any more sleep. But I would still want to know whether I am actually using the GPU, so I would prefer a test that is capable of failing.
Is there a true test to see if an installation will use the GPU?
In the NVIDIA GPU Computing Toolkit, you can verify the cuDNN installation.
On a Windows system, go to
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include\
and open cudnn.h.
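Near the top of that header you should see defines like CUDNN_MAJOR, CUDNN_MINOR, and CUDNN_PATCHLEVEL, which together give the installed cuDNN version (in cuDNN 8 and later these moved to cudnn_version.h in the same folder).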
To use TensorFlow's GPU support successfully, CUDA and cuDNN are required.
Some TensorFlow layers, such as tf.keras.layers.GRU (the Keras GRU layer), rely on cuDNN-accelerated kernels.
Check the examples provided on the TensorFlow site for GPU utilization.
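If you want a test that is actually capable of failing, one option (a sketch for the TF 2.x eager API, not an official recipe) is to pin an op to the GPU and check where it actually ran:

import tensorflow as tf

tf.debugging.set_log_device_placement(True)   # log each op's device to the console
with tf.device('/GPU:0'):                     # explicitly request the GPU
    a = tf.random.normal((1000, 1000))
    b = tf.random.normal((1000, 1000))
    c = tf.matmul(a, b)
print(c.device)                               # expect something ending in device:GPU:0
assert 'GPU' in c.device, 'matmul ran on the CPU'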
When running, my TensorFlow only prints out the line:
I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
TensorFlow logs I see on the net show lots of other libraries being loaded, like libcudnn.
Since I think my installation's performance is not optimal, I am trying to find out whether this is the reason. Any help will be appreciated!
My TF is 1.13.1
NVIDIA Driver Version: 418.67
CUDA Version: 10.1 (I also have 10.0 installed. Can this be the problem?)
According to the TensorFlow documentation, cuDNN is a requirement for tensorflow-gpu. If you don't have cuDNN installed, you wouldn't be able to use tensorflow-gpu, since the dependency library would fail to load.
So, if you have successfully installed tensorflow-gpu and are able to use it, e.g.
import tensorflow as tf
tf.Session()
you are fine.
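If you want a check that also reports device placement with the TF 1.x API, something along these lines should work (a sketch, assuming tensorflow-gpu 1.x):

import tensorflow as tf

print(tf.test.is_gpu_available())  # True if a CUDA-capable GPU can actually be used (TF 1.x API)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))  # logs which device each op is placed on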
EDIT
I just checked here, and tensorflow_gpu-1.13.1 officially supports only CUDA 10.0. I would recommend using it instead of CUDA 10.1.
Further, NVIDIA recommends using driver version 410.48 with CUDA 10.0. I would stick with it as well.
Actually, I always rely on a stable setup, and I have tried most of the TF/CUDA/cuDNN version combinations. The most stable for me was TF 1.9.0, CUDA 9.0, cuDNN 7. I used it for a long time without a problem. You should give it a try if it suits you.