How to use Tensorflow GPU with CUDA 10 on RTX 2070? - python

In a few days I will set up my new computer with an RTX 2070.
I would like to use tensorflow GPU, but I can't find compatible versions of CUDA and Tensorflow GPU.
As far as I know, I need CUDA 10 to benefit from the additional computing power of the RTX's Turing architecture. But according to the Tensorflow website, the newest version of tf (tensorflow_gpu-1.12.0) only works with CUDA 9.
I would prefer to get it all working on windows 10 but if there is no other way, linux would work as well.
Somewhere on the internet I read about two rumors:
1. there is some way to compile an unpublished version of tf-gpu which works with CUDA 10
2. an official version of tf-gpu supporting CUDA 10 will be published in January 2019 (which is almost over now).
Can someone confirm one of those rumors (ideally with a source) or tell me how I can get it all working?

You're correct that you need CUDA 10 and that tensorflow-gpu currently doesn't support it. What you need to do is compile tensorflow from source, as your first rumor suggests.
Installation steps:
Install CUDA 10 and cuDNN 7.3.1
Configure Tensorflow and compile it
Install the .whl package with pip
Here are some tutorials to compile tensorflow.
Windows:
https://www.pytorials.com/how-to-install-tensorflow-gpu-with-cuda-10-0-for-python-on-windows/2/
Ubuntu:
https://medium.com/@saitejadommeti/building-tensorflow-gpu-from-source-for-rtx-2080-96fed102fcca
https://towardsdatascience.com/how-to-make-tensorflow-work-on-rtx-20xx-series-73eb409bd3c0
Alternatively
you can find the pre-built tensorflow wheels here, thus skipping step 2:
https://github.com/fo40225/tensorflow-windows-wheel
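Once the wheel (whether self-compiled or one of the pre-built ones) is installed, a quick sanity check in Python confirms that the CUDA 10 build actually sees the card. This is a minimal sketch using the TF 1.x API, since the builds discussed here are 1.x:
import tensorflow as tf
# Print the version of the wheel that was installed.
print(tf.__version__)
# TF 1.x API: True only if a CUDA-capable GPU is visible and usable.
print(tf.test.is_gpu_available())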

New Tensorflow 2.4 GPU issues

Question
Tensorflow 2.4.1 doesn't recognize my GPU, even though I followed the official instructions from Tensorflow, as well as the ones from NVIDIA for CUDA and NVIDIA for cuDNN, to install it on my computer. I also installed it in conda (which I'm not sure is needed?).
When I try the official way to check if TF is using GPUs, I get 0:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
Num GPUs Available: 0
Specifications
Hardware:
My NVIDIA GPU fulfills the requirements specified by Tensorflow.
Software:
I installed CUDA (with CUPTI) and cuDNN as mentioned above, so I got:
ubuntu 20.04 LTS
NVIDIA driver = 460.39
CUDA (+CUPTI) = 11.2
cuDNN = 8.1.1
In a conda environment I have:
python = 3.8
tensorflow = 2.4.1 (which I understand is the new way of getting GPU support)
and I additionally installed cudatoolkit==11.0 and cudnn==8.0 for conda, as mentioned here.
Procedure followed:
It did not work before I installed the extra conda packages, and it still doesn't work after installing them.
After quite a bit of research, it finally works on my computer: the latest versions of the components (i.e. CUDA 11.2, cuDNN 8.1.0) are not tested and do not guarantee a working result with TF 2.4.1. Therefore, this is my final configuration:
nvidia-drivers-460.39 ships with CUDA 11.2 drivers. However, you can still install the CUDA 11.0 runtime and get it from the official NVIDIA archive for CUDA. Following the installation instructions is still mandatory (i.e. adding the path variables and so on).
The cuDNN library needs to be on version 8.0.4. You can also get it from the official NVIDIA archive for cuDNN.
After installing both components on these specific versions, I successfully get:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
Num GPUs Available: 1
with a few debug messages indicating that the GPU libraries were correctly loaded.
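As a general sanity check for this kind of version mismatch, recent TF 2.x releases can report which CUDA and cuDNN versions the installed wheel was built against; a minimal sketch, assuming tf.sysconfig.get_build_info is available (TF 2.3+):
import tensorflow as tf
# Versions the pip wheel was compiled against (not what is installed locally).
build = tf.sysconfig.get_build_info()
print("CUDA:", build.get("cuda_version"))
print("cuDNN:", build.get("cudnn_version"))
# The locally installed runtime should match or be compatible with these,
# e.g. CUDA 11.0 / cuDNN 8.0.x for TF 2.4.1.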
EDIT:
By the way, for the folks out there who use PyCharm: either include the environment variables in PyCharm as well, or make them system-wide. Otherwise your TF still won't see the GPUs.
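A minimal sketch to check, from inside the PyCharm run configuration itself, whether the CUDA-related variables are actually visible to the interpreter (the variable names below are the usual Linux ones and may differ on your setup):
import os
# If these print None or are missing the CUDA directories, TF will silently
# fall back to CPU; fix the run configuration or make the variables system-wide.
print("PATH:", os.environ.get("PATH"))
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH"))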

Tensorflow uses CPU instead of GPU. How to fix?

I want to run a project using Anaconda, TensorFlow 2.3 and Keras 2.4.3 (a CNN example), on Windows 10.
I installed Visual Studio 2019 Community Edition, CUDA 10.1 and cuDNN 8.0.5 for CUDA 10.1.
Using Anaconda I created an environment with TensorFlow (tensorflow-gpu didn't help), Keras, matplotlib and scikit-learn. I tried to run it on the CPU, but it takes a lot of time (20 minutes for just one epoch, and there are 35).
I need to run it using the GPU, but TensorFlow doesn't see my GPU device (GeForce GTX 1060). Can someone help me find the problem? I tried to solve the problem using this Tensorflow guide, but it didn't help me.
This works 100%, no need to install anything manually (cuda for example)
conda create --name tf_gpu tensorflow-gpu
OK, so I tried to install all the components into a new Anaconda environment. But instead of "conda install tensorflow-gpu" I decided to run "pip install tensorflow-gpu", and now it works via GPU...
Just a heads up, the Cudnn version you were trying to use was incompatible.
Listing Versions and compatible CUDA+Cudnn
You can go here and then scroll down to the bottom to see what versions of CUDA and Cudnn were used to build TensorFlow.
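A quick way to cross-check against that table is to print the installed version and whether the package was built with CUDA support at all; a minimal sketch:
import tensorflow as tf
# The version string tells you which row of the build table applies.
print(tf.__version__)
# False means the installed package has no CUDA support at all
# (e.g. a plain CPU build), regardless of what CUDA/cuDNN is installed locally.
print(tf.test.is_built_with_cuda())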

How to use tensorflow v2 with directml backend

I have a Windows computer with an AMD GPU (RX 5600 XT), and I want to run tensorflow on the GPU.
I found "tensorflow-directml" which allows me to run tensorflow on my gpu, but it uses tensorflow 1.14.0.
Is there another version of "tensorflow-directml" that uses tensorflow v2, or is there another way to run tensorflow in my gpu?
Thanks, and sorry if I wrote something wrong or inaccurate
Microsoft announced the DirectML plugin for tensorflow 2 in June this year. Check it out at this link: https://learn.microsoft.com/en-us/windows/ai/directml/gpu-tensorflow-plugin. However, I believe the DirectML plugin may not yet be compatible with your particular GPU model.
Is there another version of "tensorflow-directml" that uses tensorflow v2
No. According to PyPI, the latest release available to the public (Sep 12, 2020) is tensorflow-directml 1.15.3.dev200911. For more details please refer to this.
To run Tensorflow on the GPU on Windows:
For TensorFlow 1.x (i.e. for releases 1.15 and older), the CPU and GPU packages are separate:
pip install tensorflow-gpu==1.15 # GPU
From Tensorflow 2.x (i.e. v2) onwards, the pip package includes GPU support for CUDA-enabled cards:
pip install tensorflow
For more information please refer to this.
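To confirm which accelerators a TF 2.x install can actually see, a minimal sketch; note that depending on the backend or plugin, an accelerator may be registered under a device type other than 'GPU':
import tensorflow as tf
# List every physical device TensorFlow can see, not just type 'GPU'.
for device in tf.config.list_physical_devices():
    print(device)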

Windows 10 RTX 2070 using keras gpu with anaconda optimize?

I recently got a new PC with Windows 10 and an RTX 2070. I installed Anaconda in order to use Python and the deep learning frameworks available, such as Keras. With Anaconda Navigator I installed the keras-gpu package. It seems that installing this package also installs the "cuda-toolkit 10" and "cudnn" packages in Anaconda.
I was wondering whether my GPU will be used in an optimized way during training with Keras. In fact, in the past, when I installed keras-gpu, I had to install Microsoft Visual Studio Community 2015 and CUDA toolkit 9.0/cuDNN on my own in order to make keras-gpu work. So it seems a bit weird that I had no errors.
Thanks for the help!
It depends on which backend your Keras is using.
e.g. if you are using the tensorflow backend, the following statement will give you the answer:
import tensorflow as tf
print(tf.test.is_gpu_available())
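Note that tf.test.is_gpu_available is deprecated in TF 2.x. A minimal sketch, assuming a TensorFlow 2.x backend with eager execution and at least one visible GPU, that also confirms an op is actually placed on the GPU:
import tensorflow as tf
# Preferred TF 2.x check: list the visible GPUs.
print(tf.config.list_physical_devices('GPU'))
# Place a small op explicitly on the first GPU and check where it ran.
with tf.device('/GPU:0'):
    x = tf.random.normal([1024, 1024])
    y = tf.matmul(x, x)
print(y.device)  # ends with 'device:GPU:0' when the GPU is used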

Installing tensorflow on GPU [duplicate]

This question already has an answer here: tensorflow on GPU doesn't work (1 answer). Closed 2 years ago.
I've installed the CPU version of tensorflow. I'm using Windows 10 and I have an AMD Radeon 8600M as my GPU. Can I install the GPU version of tensorflow now? Will there be any problems? If not, where can I get instructions to install the GPU version?
Your graphics card does not support CUDA, without which you cannot use tensorflow on the GPU. Your system will run tensorflow, but only on the CPU.
However, you can use PyTorch as another way to do a similar task. PyTorch has another version called cltorch, which runs on OpenCL, which your graphics card does support.
Please follow this link for more details.
https://github.com/hughperkins/cltorch
First of all, if you want to see a performance gain, you should have a better GPU, and second, Tensorflow uses CUDA, which is only for NVIDIA GPUs with CUDA compute capability 3.0 or higher. I recommend you use some cloud service such as AWS or Google Cloud if you really want to do deep learning.
If you want to use an AMD GPU with TensorFlow, you can follow the instructions here.
However:
The GPU you are using is not that powerful and is unlikely to give you much of a performance boost.
You will need to use Linux for these instructions; although there is a Windows version of ComputeCpp, it has not been tested with TensorFlow yet.
It depends on your graphics card: it has to be NVIDIA, and you have to install the CUDA version corresponding to your system and OS. Then you have to install the cuDNN version corresponding to the CUDA version you installed.
Steps:
Install NVIDIA 367 driver
Install CUDA 8.0
Install cuDNN 5.0
Reboot
Install tensorflow from source with bazel using the above configuration
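After the source build, a minimal sketch using the TF 1.x-era API that lists the local devices, including the GPU name and memory, to confirm the build picked up CUDA:
from tensorflow.python.client import device_lib
# A working CUDA build lists a /device:GPU:0 entry with the card's
# name and available memory in physical_device_desc.
for device in device_lib.list_local_devices():
    print(device.name, device.physical_device_desc)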
