tensorflow cuDNN compatibility - python

I'm using keras to make a model.
When I compile my model it fails, and this error message pops up: tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
My computer's spec is as follows:
GPU : RTX2070,
Tensorflow version : 1.13.1,
Python version : 3.6.5,
CUDA : 10.0,
cuDNN : 7.4.2
I tried cuDNN 7.5.0 and the suggestions in this link: cannot train Keras convolution network on GPU, but changing the cuDNN version doesn't work for me.
So I tried this code:
>>>import tensorflow as tf
>>>a = tf.constant([1])
>>>b = tf.constant([2])
>>>sess = tf.Session()
>>>with tf.device('/gpu:0'):
... print(sess.run(a+b))
...
[3]
It works! Does anyone know why I'm running into this problem?

This issue might be of help: https://github.com/tensorflow/tensorflow/issues/24828
Try checking which versions of cuDNN and TensorFlow you have.
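A quick way to check those versions from Python (a minimal sketch for TF 1.x, which the question is using):
import tensorflow as tf
print(tf.__version__)                # installed TensorFlow version
print(tf.test.is_built_with_cuda())  # True if this build was compiled with CUDA support
print(tf.test.is_gpu_available())    # True if TensorFlow can actually initialize a GPU device
If the last call fails or returns False, the CUDA/cuDNN combination is usually the culprit.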

I solved this problem with conda install tensorflow-gpu. It automatically installed cuDNN 7.3.1, and the problem was solved.
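For reference, the command is simply the following, run inside a conda environment (the exact cuDNN build conda pulls in may change over time):
conda install tensorflow-gpu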

Related

How to make TensorFlow use the GPU?

I'm working with Python and I would like to use TensorFlow with my GTX 2080 Ti, but TensorFlow is only using the CPU.
When I ask for the GPU devices on my computer, it always returns an empty list:
In [3]: tf.config.list_physical_devices('GPU')
Out[3]: []
I tried this post: How do I use TensorFlow GPU?
but I don't use cuda and tensorflow-gpu seems outdated.
I also tried this well-made tutorial https://www.youtube.com/watch?v=hHWkvEcDBO0 without success.
I installed the card drivers, CUDA and cuDNN but I still get the same issue.
I also uninstalled tensorflow and keras and installed them again, without success.
I don't know how to find out what is missing or whether I did something wrong.
Python 3.10
TensorFlow version: 2.11.0
CUDA version: 11.2
cuDNN: 8.1
This line of code tells me that CUDA support is not built in:
from tensorflow.python.platform import build_info as tf_build_info
print(tf_build_info.build_info)
OrderedDict([('is_cuda_build', False), ('is_rocm_build', False), ('is_tensorrt_build', False), ('msvcp_dll_names', 'msvcp140.dll,msvcp140_1.dll')])
Starting with TensorFlow 2.11, GPU support was dropped on native Windows. You can optionally use the DirectML plugin.
Tutorial here
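If you still want GPU acceleration on native Windows, the commonly suggested options look roughly like this (a sketch; check that the DirectML plugin supports your TensorFlow version before installing):
# Option 1: stay on the last release line with native Windows CUDA support
pip install "tensorflow<2.11"
# Option 2: CPU build of TensorFlow plus the DirectML plugin
pip install tensorflow-cpu tensorflow-directml-plugin
Alternatively, install TensorFlow inside WSL2, where CUDA-based GPU support is still available.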

How to get cuDNN to work? (failed to initialize)

I got this error on Windows 10:
UnknownError: Failed to get convolution algorithm. This is probably
because cuDNN failed to initialize, so try looking to see if a warning
log message was printed above. [[{{node conv2d_1/convolution}} =
Conv2D[T=DT_FLOAT,
_class=["loc:#training_1/Adam/gradients/conv2d_1/convolution_grad/Conv2DBackpropFilter"],
data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID",
strides=[1, 1, 1, 1], use_cudnn_on_gpu=true,
_device="/job:localhost/replica:0/task:0/device:GPU:0"](training_1/Adam/gradients/conv2d_1/convolution_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer,
conv2d_1/kernel/read)]] [[{{node loss_1/mul/_267}} =
_Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0",
send_device="/job:localhost/replica:0/task:0/device:GPU:0",
send_device_incarnation=1, tensor_name="edge_782_loss_1/mul",
tensor_type=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/device:CPU:0"]]
I have an RTX 2070 and:
Python 3.6.5
tf 1.12.0
tf-gpu 1.12.0
cuda 9.0 with all patches.
cudnn 7.3.1
keras 2.2.4
I know the Nvidia page for cuDNN and I have read some other answers here. I am interested in the small details that are missing. After moving the 3 files to the 3 directories in the CUDA folder, is there one more step? Perhaps there is an order in which the different parts need to be installed?
CUDA seems to work fine; Python sees it, and so does MATLAB.
The error happens while running an MNIST example that I got from the web, which works if I uninstall tensorflow-gpu and use tensorflow on the CPU.
An example of something that was a great help in the past: you can't install CUDA unless you go custom and uncheck the Visual Studio option.
Thank you!
Had a similar problem with an RTX 2070 card using CUDA 10...
The solution was to use:
config.gpu_options.allow_growth = True
in TensorFlow.
More info on how to use that parameter:
How to prevent tensorflow from allocating the totality of a GPU memory?
RTX cards require CUDA 10 I think.
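For completeness, a minimal sketch of how that option is usually wired into a TF 1.x / Keras setup (the exact session wiring is an assumption; adapt it to your own script):
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True     # allocate GPU memory on demand instead of grabbing it all
K.set_session(tf.Session(config=config))   # make Keras use this session

# ... build and train the model as usual ...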

How to debug Tensorflow segmentation fault in model.fit()?

I am trying to run the Keras MNIST example using tensorflow-gpu with a GeForce RTX 2080. My environment is Anaconda on a Linux system.
I am running the unmodified example from a command line python session. I get the following output:
Using TensorFlow backend.
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce RTX 2080, pci bus id: 0000:01:00.0, compute capability: 7.5
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
conv2d_1/random_uniform/RandomUniform: (RandomUniform):
/job:localhost/replica:0/task:0/device:GPU:0
conv2d_1/random_uniform/sub: (Sub):
/job:localhost/replica:0/task:0/device:GPU:0
conv2d_1/random_uniform/mul: (Mul):
/job:localhost/replica:0/task:0/device:GPU:0
conv2d_1/random_uniform: (Add):
/job:localhost/replica:0/task:0/device:GPU:0
[...]
The last lines I receive are:
training/Adadelta/Const_31: (Const): /job:localhost/replica:0/task:0/device:GPU:0
training/Adadelta/mul_46/x: (Const): /job:localhost/replica:0/task:0/device:GPU:0
training/Adadelta/mul_47/x: (Const): /job:localhost/replica:0/task:0/device:GPU:0
Segmentation fault (core dumped)
From reading around I assumed this might be a memory problem and added these lines to prevent the GPU from running out of memory:
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.per_process_gpu_memory_fraction = 0.3
K.tensorflow_backend.set_session(tf.Session(config=config))
Checking with the nvidia-smi tool (watch -n1 nvidia-smi), I can confirm that the GPU is actually being used (in this run per_process_gpu_memory_fraction was set to 1).
I suspect a version incompatibility somewhere between CUDA, Keras and TensorFlow is the issue, but I don't know how to debug this.
What debugging measures are available to get to the bottom of this? What other issues might be the reason for this segfault?
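One generic measure (a sketch using only the standard library, not specific to this particular crash) is to enable faulthandler so the Python-level traceback is printed when the process receives a fatal signal such as SIGSEGV:
import faulthandler
faulthandler.enable()   # dump the Python traceback if the interpreter crashes

# ... build and fit the Keras model as usual ...
The same effect can be had without touching the code by running python -X faulthandler your_script.py. It won't fix the segfault, but it shows which Python call was active when the native code crashed.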
EDIT: I experimented further and replacing the model with this code works fine:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=input_shape),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
However, once I introduce a convolution layer like so
model = keras.Sequential([
    keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
    # keras.layers.Flatten(input_shape=input_shape),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
then I again get the aforementioned segfault.
All packages have been installed through Anaconda. I have installed:
conda 4.5.11
python 3.6.6
keras-gpu 2.2.4
tensorflow 1.12.0
tensorflow-gpu 1.12.0
cudnn 7.2.1
cudatoolkit 9.2
EDIT: I tried the same code in a non-Anaconda environment and it works flawlessly. I would prefer to use Anaconda though, to avoid system updates breaking things.
Building TensorFlow from source (r1.13) fixed the Conv2D segmentation fault.
Follow Build from Source.
my GPU : RTX 2070
Ubuntu 16.04
Python 3.5.2
Nvidia Driver 410.78
CUDA - 10.0.130
cuDNN-10.0 - 7.4.2.24
TensorRT-5.0.0
Compute Capability: 7.5
Build : tensorflow-1.13.0rc0-cp35-cp35m-linux_x86_64
Alternatively, download a prebuilt wheel from https://github.com/tensorflow/tensorflow/issues/22706
I had the exact same problem on a very similar system to Francois's, but using an RTX 2070, on which I could reliably reproduce the segmentation fault when the conv2d function was executed on the GPU. My setup:
Ubuntu: 18.04
GPU: RTX 2070
CUDA: 10
cudnn: 7
conda with python 3.6
I finally solved it by building tensorflow from source into a new conda environment. For a fantastic guide see e.g. the following link:
https://gist.github.com/Brainiarc7/6d6c3f23ea057775b72c52817759b25c
This is basically like any other build-tensorflow-from-source guide and consisted in my case of the following steps:
installing Bazel
cloning tensorflow from git and running ./configure
running the appropriate bazel build command (see link for details)
Some minor issues came up during the build, one of which was solved by installing 3 packages manually, using:
pip install keras_applications==1.0.4 --no-deps
pip install keras_preprocessing==1.0.2 --no-deps
pip install h5py==2.8.0
which I found out using this answer here:
Error Compiling Tensorflow From Source - No module named 'keras_applications'
conv2d now works like a charm when using the GPU!
However, since all this took a fairly long time (building from source takes over an hour, not counting the search for the solution on the internet), I recommend making a backup of the system after you get it working, e.g. using Timeshift or any other program that you like.
I had the same Conv2D problem with:
Ubuntu 18.04
Graphic card: GeForce RTX 2080
CUDA: cuda_10.0.130_410
CUDNN: cudnn-10.0-linux-x64-v7.4.2
conda with Python 3.6
The best advice was from this link: https://github.com/tensorflow/tensorflow/issues/24383
So a fix should come with TensorFlow 1.13.
In the meantime, using a TensorFlow 1.13 nightly build (Dec 26, 2018) plus using tensorflow.keras instead of keras solved the issue.
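The import switch itself is small; here is a sketch of the convolutional model from the question written against tensorflow.keras (the input shape shown is just the MNIST shape used above):
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])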

Does TensorFlow take all the resources from the GPU, making other CUDA code slow?

System information
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow version: tensorflow-gpu (1.7.0)
Python version: Python 3.5.2
CUDA/cuDNN version: CUDA 9.0 cuDNN 7
Describe the problem
I have a CUDA library, built from C++, for post-processing the prediction results of a TensorFlow model.
I use the following approach to make Python able to use the CUDA code from C++:
import ctypes
lib = ctypes.cdll.LoadLibrary("my.so")
result = lib.post_process(tensorflow_result)
If I test the CUDA code alone without TensorFlow, it works fine. (I save the result from TensorFlow, then use cv2.imread to feed it into my CUDA code.)
But when TensorFlow is used in my project, my CUDA code becomes 10 times slower...
My timing log is inside the CUDA .so library, so there is no way the gap comes from the Python-to-.so wrapping.
I have tried to set the fraction of GPU memory allocated by TensorFlow with:
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
but it was useless...
So I wonder: does TensorFlow take all the resources from the GPU, making other CUDA code slow?
Is the only solution to register my CUDA code as a TensorFlow op?
Any suggestions? Thanks~~~
----------------------Update----------------------
I have tested what #AnandCU suggested:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
but it doesn't speed my CUDA code back up to what I measured when testing it alone without TensorFlow.

Can I run a Keras model on the GPU?

I'm running a Keras model with a submission deadline of 36 hours. If I train my model on the CPU it will take approximately 50 hours. Is there a way to run Keras on the GPU?
I'm using the TensorFlow backend and running it in my Jupyter notebook, without Anaconda installed.
Yes, you can run Keras models on the GPU. There are a few things you will have to check first:
Your system has an Nvidia GPU (AMD doesn't work yet).
You have installed the GPU version of TensorFlow.
You have installed CUDA (installation instructions).
Verify that TensorFlow is running with the GPU (check if the GPU is working):
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
for TF > v2.0
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))
(Thanks #nbro and #Ferro for pointing this out in the comments)
OR
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
The output will be something like this:
[
  name: "/cpu:0" device_type: "CPU",
  name: "/gpu:0" device_type: "GPU"
]
Once all this is done, your model will run on the GPU.
To check if Keras (>= 2.1.1) is using the GPU:
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
All the best.
2.0 Compatible Answer: While the above-mentioned answer explains in detail how to use the GPU with a Keras model, I want to explain how it can be done for TensorFlow 2.0.
To know how many GPUs are available, we can use the below code:
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
To find out which devices your operations and tensors are assigned to,
put tf.debugging.set_log_device_placement(True) as the first statement of your program.
Enabling device placement logging causes any Tensor allocations or operations to be printed. For example, running the below code:
tf.debugging.set_log_device_placement(True)
# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
gives the output shown below:
Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
tf.Tensor(
[[22. 28.]
 [49. 64.]], shape=(2, 2), dtype=float32)
For more information, refer to this link.
Sure. I suppose that you have already installed TensorFlow for GPU.
You need to add the following block after importing Keras. I am working on a machine which has a 56-core CPU and a GPU.
import keras
import tensorflow as tf
config = tf.ConfigProto( device_count = {'GPU': 1 , 'CPU': 56} )
sess = tf.Session(config=config)
keras.backend.set_session(sess)
Of course, this usage enforces my machine's maximum limits. You can decrease the CPU and GPU consumption values.
Of course. If you are running on the TensorFlow or CNTK backends, your code will run on your GPU devices by default. But with the Theano backend, you can use the following
Theano flags:
"THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py"
I'm using Anaconda on Windows 10, with a GTX 1660 Super. I first installed the CUDA environment following this step-by-step guide. However, there is now a keras-gpu metapackage available on Anaconda which apparently doesn't require installing the CUDA and cuDNN libraries beforehand (mine were already installed anyway).
This is what worked for me to create a dedicated environment named keras_gpu:
# need to downgrade from tensorflow 2.1 for my particular setup
conda create --name keras_gpu keras-gpu=2.3.1 tensorflow-gpu=2.0
To add to #johncasey's answer, but for TensorFlow 2.0, adding this block works for me:
import tensorflow as tf
from tensorflow.python.keras import backend as K
# adjust values to your needs
config = tf.compat.v1.ConfigProto( device_count = {'GPU': 1 , 'CPU': 8} )
sess = tf.compat.v1.Session(config=config)
K.set_session(sess)
This post solved the set_session error I got: you need to use the keras backend from the tensorflow path instead of keras itself.
Using Tensorflow 2.5, building on #MonkeyBack's answer:
conda create --name keras_gpu keras-gpu tensorflow-gpu
# should show GPU is available
python -c "import tensorflow as tf;print('GPUs Available:', tf.config.list_physical_devices('GPU'))"
See if your script is using the GPU in Task Manager. If not, check whether your CUDA version is the right one for the TensorFlow version you are using, as the other answers already suggested.
Additionally, a matching cuDNN library for your CUDA version is required to run TensorFlow on the GPU. Download/extract it from here and put the DLL (e.g., cudnn64_7.dll) into the CUDA bin folder (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin).
